Alphanumeric Sorting - sorting

What is the best/fastest way to sort Alphanumeric fields?

The answer to your question is intimately related to some details you haven't provided. The "best/fastest" way depends on how long the fields are, how many you have to sort, how much memory you have available, the relative speeds of disk and memory, the details of what's in the strings, ..., ad nauseam.
Knuth Vol 3 has the details on a wide variety of approaches. I don't recall if he discusses Radix Sorting, but he probably does. If he doesn't, you should look up some references on Radix Sorting. It's only useful in a narrow set of circumstances, but positively flies there. If you've got a small set of short strings, Bubble Sort will perform better than complex sorts on some architectures, due to lower overhead. The C Run Time Library includes a version of Quick Sort because that can be a very efficient algorithm for larger data sets in some circumstances.
Net-net, the answer is "It depends".

The "best" way depends on a lot of factors:
Do you need to support more than one language?
Do you need to support more than one language simultaneously?
Do you need to support languages other than the current operating system or user language? (e.g., web applications)
Do you need to support more than one encoding? (Unicode, UTF-16LE/UTF-8, ANSI code pages, etc.) (A small locale-aware sketch follows this list.)
Do you need to support long or highly redundant inputs? (where precomputation or compression may speed up sorting operations)
Do you need to support a large number of inputs, e.g., millions or billions of inputs?
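To make the locale and "alphanumeric" points concrete, here is a minimal, illustrative Python sketch; the regex-based "natural" key and the sample field names are assumptions for the example, not something prescribed by any particular platform:
import locale
import re

# Assumption: collate with the user's default locale; adjust as needed.
locale.setlocale(locale.LC_COLLATE, "")

def natural_key(s):
    # Split digit runs out of the string so "file10" sorts after "file2",
    # and collate the text parts according to the current locale.
    parts = re.split(r"(\d+)", s)
    return [int(p) if p.isdigit() else locale.strxfrm(p) for p in parts]

fields = ["file10", "file2", "apple", "Banana"]
print(sorted(fields, key=natural_key))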

Bubble sort! Just kidding :)
Probably your best bet would be quicksort or mergesort.
Both are O(n log n), as opposed to bubble sort's O(n^2).
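For reference, a minimal merge sort sketch in Python (purely illustrative; in practice you would use the language's built-in sort, as the next answer says):
def merge_sort(items):
    # Classic top-down merge sort: O(n log n) comparisons, stable.
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # '<=' keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]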

You don't specify your target language, but whatever it is, it should have reliable, built-in sorting methods, so use one of them! For PHP...
Load the values into an array and call sort($array) (see the PHP manual entry for sort):
$fruits = array("lemon", "orange", "banana", "apple");
sort($fruits);
foreach ($fruits as $key => $val)
{
    echo "fruits[" . $key . "] = " . $val . "\n";
}
Output:
fruits[0] = apple
fruits[1] = banana
fruits[2] = lemon
fruits[3] = orange

You will find that most development libraries ship with an implementation of the quicksort algorithm, which is often among the fastest general-purpose sorting algorithms in practice. Check out the Wikipedia article on quicksort.

In C#, List<T> has .Sort().
In general, quicksort is very fast in many situations, but it always depends on the size of the array.

Related

When would someone ever use selection sort?

If there are so many faster and more efficient sorting algorithms available (merge sort, heap sort, quick sort), why is selection sort still taught? If it is because they are still used, when are some examples where this would be true?
I believe it's still taught because it's a simple algorithm to understand and helps build the foundation for other sorting algorithms. It's also an easy exercise in understanding time and space complexity for algorithms. Not aware of any practical usages in modern computing, but it does have very low memory overhead so can be ideal for situations where memory is at a premium.
Personally, I think selection sort only exists as a teaching tool; beyond that, I don't see any other reason to use it.
It gives you a great understanding of Big-O, and it's effective to compare selection sort to quick sort/merge sort/heap sort so that you can actually experience the difference in run time.
In short, selection sort is used for educational purposes 😂
Selection sort has a few advantages (a short sketch follows below):
Fewer memory writes compared to other algorithms, so it may be useful for disk operations or for flash/EEPROM-style memory where writes are expensive. It is also an in-place sort.
It is the basic idea behind heap sort.
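To make the "fewer memory writes" point concrete, here is a minimal selection sort sketch in Python (just the textbook algorithm, not anyone's production code): it performs at most n - 1 swaps regardless of the input order.
def selection_sort(a):
    # In-place selection sort: O(n^2) comparisons, but at most n - 1 swaps.
    n = len(a)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):
            if a[j] < a[smallest]:
                smallest = j
        if smallest != i:            # write to the array only when needed
            a[i], a[smallest] = a[smallest], a[i]
    return a

print(selection_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]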

What are some examples of Sorting methods in Web development?

I am a TA for an algorithms class and we are doing a unit on sorting, and I wanted to add a discussion of quicksort. There are many good theoretical discussions of sorting methods on the web showing which one is better in which circumstances...
What are some real-life instances of quicksort I can give to my students, especially in the field of web development?
does Django use quick-sort?
does React?
does Leaflet use any kind of sort?
In fact, I don't really care about quicksort particularly. Any sorting method will do if I can point to a specific library that uses it. Thanks.
Why are my students learning sorting? Why am I teaching this? I can think of academic or theoretical reasons... basically that we are constantly ordering things, either in their own right or as part of another algorithm. But what about my students, who may never have to write their own sort function?
I'll answer the question "why do we learn how to write a sort function?" Why do we learn to write anything that's already given to us by a library? Hashes, lists, queues, trees... why learn to write any of them?
The most important is to appreciate their performance consequences and when to use which one. For example, Ruby Arrays supply a lot of built in functionality. They're so well done and easy to use that it's easy to forget you're working with a list and write yourself a pile of molasses.
Look at this loop that finds a thing in a list and replaces it.
things.each { |thing|
  idx = thing.index(marker)
  thing[idx] = stuff
}
With no understanding of the underlying algorithms that seems perfectly sensible.
For each list in the list of things.
Find the item to replace.
Insert a new item in its place.
Two steps per thing. What could be simpler? And when they run it with a small amount of test data it's fine. When they put it into production with a real amount of data and having to do it thousands of times per second it's dog slow. Why? Without an appreciation for what all those methods are doing under the hood, they cannot know.
things.each { |thing|       # O(things)
  idx = thing.index(marker) # O(thing)
  thing[idx] = stuff        # O(1)
}
Those deceptively simple-looking Array methods are their own hidden loops. In the worst case each one must scan the whole list. Loops inside loops make this quadratic: O(n*m). How slow? If things is 1000 items long, and each thing has 1000 items in it, that's... 1000 * 1000 or 1,000,000 operations!
And this isn't nearly the worst trouble students can get into; sometimes they write O(n!) loops. I actually find it hard to come up with an example, I'm so ingrained against doing it.
But that only becomes apparent after you throw a ton of data at it. While you're writing it, how can you know?
How can they make it faster? Without understanding the other options available, and their performance characteristics, like hashes and sets and trees, they cannot know. An experienced programmer would make one immediate change to the data structure and turn things into a list of sets.
things.each { |thing|  # O(things)
  thing.delete(marker) # O(1)
  thing.add(stuff)     # O(1)
}
This is much faster. Deleting and adding with an unordered set is O(1) so it's effectively free no matter how large thing gets. Now if things is 1000 items long, and each thing has 1000 items in it that's 1000 operations. By using a more appropriate data structure I just sped up that loop by 1000 times. Really what I did is changed it from O(n*m) to O(n).
Another good example is learning how to write a solid comparison function for multi-level data. Why is the Schwartzian transform fast? You can't appreciate that without understanding how sorting works.
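As a rough illustration of the decorate-sort-undecorate idea behind the Schwartzian transform (sketched here in Python rather than the Ruby above; the word list is invented): the potentially expensive key is computed once per element instead of once per comparison.
words = ["banana", "Apple", "cherry"]

# Decorate: compute the (possibly expensive) key once per element.
decorated = [(w.lower(), w) for w in words]
# Sort on the precomputed key (tuples compare by their first element first).
decorated.sort()
# Undecorate: keep only the original values.
print([w for _, w in decorated])   # ['Apple', 'banana', 'cherry']
Modern sort routines with a key parameter (Python's sorted(key=...), Ruby's sort_by) do essentially this under the hood.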
You could simply be told these things, sorting is O(n log n), finding something in a list is O(n), and so on... but having to do it yourself gives you a visceral appreciation for what's going on under the hood. It makes you appreciate all the work a modern language does for you.
That said, there's little point in writing six different sort algorithms, or four different trees, or five different hash conflict resolution functions. Write one of each to appreciate them, then just learn about the rest so you know they exist and when to use them. 98% of the time the exact algorithm doesn't matter, but sometimes it's good to know that a merge sort might work better than a quick sort.
Because honestly, you're never going to write your own sort function. Or tree. Or hash. Or queue. And if you do, you probably shouldn't be. Unless you intend to be the 1% that writes the underlying libraries (like I do), if you're just going to write web apps and business logic, you don't need a full blown Computer Science education. Spend that time learning Software Engineering instead: testing, requirements, estimation, readability, communications, etc...
So when a student asks "why are we learning this stuff when it's all built into the language now?" (echoes of "why do I have to learn math when I have a calculator?") have them write their naive loop with their fancy methods. Shove big data at it and watch it slow to a crawl. Then write an efficient loop with good selection of data structures and algorithms and show how it screams through the data. That's their answer.
NOTE: This is the original answer before the question was understood.
Most modern languages use quicksort as their default sort, but usually modified to avoid the O(n^2) worst case. Here's the BSD man page on their implementation of qsort_r(). Ruby uses qsort_r.
The qsort() and qsort_r() functions are an implementation of C.A.R. Hoare's "quicksort" algorithm, a variant of partition-exchange sorting; in particular, see D.E. Knuth's Algorithm Q. Quicksort takes O(N lg N) average time. This implementation uses median selection to avoid its O(N**2) worst-case behavior.
PHP also uses quicksort, though I don't know which particular implementation.
Perl uses its own implementation of quicksort by default. But you can also request a merge sort via the sort pragma.
In Perl versions 5.6 and earlier the quicksort algorithm was used to implement "sort()", but in Perl 5.8 a mergesort algorithm was also made available, mainly to guarantee worst case O(N log N) behaviour: the worst case of quicksort is O(N**2). In Perl 5.8 and later, quicksort defends against quadratic behaviour by shuffling large arrays before sorting.
Since 2.3, Python has used Timsort, which is guaranteed to be stable. Any software written in Python (Django) is likely to use the default Timsort as well.
Javascript, really the ECMAScript specification, does not say what type of sorting algorithm to use for Array.prototype.sort. It only says that it's not guaranteed to be stable. This means the particular sorting algorithm is left to the Javascript implementation. Like Python, any Javascript frameworks such as React or Leaflet are likely to use the built in sort.
Visual Basic for Applications (VBA) comes with NO sorting algorithm. You have to write your own. This is a bizarre oversight for any language, but particularly one that's designed for business use and spreadsheets.
Almost any table is sorted. Most web apps are backed by an SQL database, and the actual sorting is performed inside that database; for example, the SQL query SELECT id, date, total FROM orders ORDER BY date DESC. This kind of sorting uses already-sorted database indexes, which are mostly implemented using B-trees (or data structures inspired by B-trees). But if data needs to be sorted on the fly, then I think quicksort is usually used.
Sorting, merging of sorted files, and binary search in sorted files are often used in big data processing, analytics, ad dispatching, full-text search... Even Google results are sorted :)
Sometimes you don't need a full sort, just a partial sort or a min-heap, for example in Dijkstra's algorithm for finding the shortest path, which is used (or can be used, or I would use it :) ) for example in route planning (Google Maps).
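A quick sketch of that partial-sort idea in Python, using the standard heapq module (the distance values are invented): you get the k smallest items without fully sorting the input.
import heapq

distances = [42, 7, 19, 3, 88, 25, 11]

# The three smallest values, without sorting the whole list (roughly O(n log k)).
print(heapq.nsmallest(3, distances))   # [3, 7, 11]

# Or keep an explicit min-heap, as Dijkstra's algorithm does:
heap = list(distances)
heapq.heapify(heap)          # O(n)
print(heapq.heappop(heap))   # 3, the current minimum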
As pointed out by Schwern, the sorting is almost always provided by the programming language or its implementation engine, and libraries / frameworks just use that algorithm, with a custom comparison function when they need to sort complex objects.
Now, if your objective is to have a real-life example in the Web context, you could actually go the other way and use the lack of a sorting method in SVG, and make an exercise out of it. Unlike other DOM elements, an SVG container paints its children in the order they are appended, irrespective of any "z-index" equivalent. So to implement "z-index" functionality, you have to re-order the nodes yourself.
And to avoid just using a custom comparison function and relying on array.sort, you could add extra constraints, like stability, typically to preserve the current order of nodes with the same "z-index".
Since you mention Leaflet: one of the frustrations with the pre-1.0 versions (e.g. 0.7.7) was that all vector shapes were appended into the same single SVG container, without any provided sorting functionality except for bringToFront / bringToBack.

Fast/Area optimised sorting in hardware (fpga)

I'm trying to sort an array of 8-bit numbers using VHDL.
I'm trying to find one method that optimises delay and another that uses less hardware.
The size of the array is fixed, but I'm also interested in extending the functionality to variable lengths.
I've come across 3 algorithms so far:
Batcher's parallel method
Green's sort
Van Voorhis' sort
Which of these will do the best job? Are there any other methods I should be looking at?
Thanks.
There are a lot of research articles on the matter. You could try searching the web for them. I did a search for "Sorting Networks" and came up with a lot of comparisons of different algorithms and how well they fit into an FPGA.
The algorithm you choose will greatly depend on which parameter is most important to optimize for, i.e. latency, area, etc. Another important factor is where the values are stored at the beginning and end of the sort. If they are stored in registers, all might be accessed at once, but if you have to read them from a memory with a limited width, you should consider that in your implementation as well, because then you will have to sort values in a stream, and rearrange that stream before saving it back to memory.
Personally, I'd consider something with a fixed, data-independent schedule like merge sort, which takes a constant time to sort, so you could easily schedule the sort for a fixed-size array. I'm however not sure how well this scales or works with arbitrarily sized arrays. You'd probably have to set an upper limit on array size, and this approach also works best if all data is stored in registers.
I read about this in a book by Knuth, and according to that book, Batcher's parallel merge sort is the fastest algorithm and also the most hardware efficient.
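For intuition (modelled in Python here, not VHDL): a sorting network is just a fixed schedule of compare-exchange operations, which is why it maps so well onto hardware. Below is a minimal sketch of the 4-input Batcher odd-even network; comparators in the same stage are independent and could run in parallel as one layer of compare-exchange units on an FPGA.
# Comparator schedule for a 4-input Batcher odd-even merge sorting network.
STAGES = [[(0, 1), (2, 3)], [(0, 2), (1, 3)], [(1, 2)]]

def sort4(values):
    v = list(values)
    for stage in STAGES:
        for i, j in stage:      # compare-exchange: the only primitive needed
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
    return v

print(sort4([9, 3, 7, 1]))   # [1, 3, 7, 9]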

Is there any reason to implement my own sorting algorithm?

Sorting has been studied for decades, so surely the sorting algorithms provided by any programming platform (Java, .NET, etc.) must be good by now, right? Is there any reason to override something like System.Collections.SortedList?
There are absolutely times where your intimate understanding of your data can result in much, much more efficient sorting algorithms than any general-purpose algorithm available. I shared an example of such a situation in another post on SO, but I'll share it here just to provide a case in point:
Back in the days of COBOL, FORTRAN, etc... a developer working for a phone company had to take a relatively large chunk of data that consisted of active phone numbers (I believe it was in the New York City area), and sort that list. The original implementation used a heap sort (these were 7 digit phone numbers, and a lot of disk swapping was taking place during the sort, so heap sort made sense).
Eventually, the developer stumbled on a different approach: because one, and only one, of each phone number could exist in his data set, he realized that he didn't have to store the actual phone numbers themselves in memory. Instead, he treated the entire 7-digit phone number space as a very long bit array (at 8 phone numbers per byte, 10 million phone numbers requires just over a megabyte to capture the entire space). He then did a single pass through his source data and set the bit for each phone number he found to 1. He then did a final pass through the bit array looking for set bits and output the sorted list of phone numbers.
This new algorithm was much, much faster (at least 1000x faster) than the heap sort algorithm, and consumed about the same amount of memory.
I would say that, in this case, it absolutely made sense for the developer to develop his own sorting algorithm.
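A rough Python sketch of that bitmap technique (the original was presumably COBOL-era code; the sample numbers here are invented):
def bitmap_sort(numbers, space=10_000_000):
    # One bit per possible 7-digit value: about 1.25 MB for the whole space.
    bits = bytearray(space // 8 + 1)
    for n in numbers:                      # single pass over the input
        bits[n >> 3] |= 1 << (n & 7)
    # Single pass over the bit array, emitting set bits in order.
    return [n for n in range(space) if bits[n >> 3] & (1 << (n & 7))]

print(bitmap_sort([5551234, 5550000, 1234567]))   # [1234567, 5550000, 5551234]
This relies on the anecdote's guarantee that each value occurs at most once; duplicates would be collapsed.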
If your application is all about sorting, and you really know your problem space, then it's quite possible for you to come up with an application specific algorithm that beats any general purpose algorithm.
However, if sorting is an ancillary part of your application, or you are just implementing a general-purpose algorithm, chances are very, very good that some extremely smart university types have already provided an algorithm that is better than anything you will be able to come up with. Quicksort is really hard to beat if you can hold things in memory, and heap sort is quite effective for massive data set ordering (although I personally prefer to use B+Tree type implementations for the heap because they are tuned to disk paging performance).
Generally no.
However, you know your data better than the people who wrote those sorting algorithms. Perhaps you could come up with an algorithm that is better than a generic algorithm for your specific set of data.
Implementing your own sorting algorithm is akin to premature optimization, and as Sir Charles Antony Richard Hoare said, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil".
Certain libraries (such as Java's very own Collections.sort) implement a sort based on criteria that may or may not apply to you. For example, Collections.sort uses a merge sort for its O(n log n) efficiency as well as the fact that it's a stable sort: if two elements have the same value, the first element in the original collection stays in front. That is good for multi-pass sorting on different criteria; sort by date first, then by name, and the collection ends up name-sorted with ties still in date order. However, if you want slightly better constants or have a special data set, it might make more sense to implement your own quicksort or radix sort specific to exactly what you want to do.
That said, all operations are fast on sufficiently small n.
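The stable, multi-pass point above is easy to demonstrate with any stable sort; a small Python sketch (Python's sorted() is stable, like Collections.sort; the records are invented):
records = [("2021-03-01", "Bob"), ("2021-01-15", "Alice"), ("2021-03-01", "Alice")]

# Pass 1: sort by the secondary key (date).
by_date = sorted(records, key=lambda r: r[0])
# Pass 2: sort by the primary key (name); stability preserves date order for ties.
by_name_then_date = sorted(by_date, key=lambda r: r[1])

print(by_name_then_date)
# [('2021-01-15', 'Alice'), ('2021-03-01', 'Alice'), ('2021-03-01', 'Bob')]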
Short answer: no, except for academic interest.
You might want to multi-thread the sorting implementation.
You might need better performance characteristics than Quicksort's O(n log n); think bucket sort, for example.
You might need a stable sort while the default algorithm uses quicksort. Especially for user interfaces you'll want to have the sorting order be consistent.
More efficient algorithms might be available for the data structures you're using.
You might need an iterative implementation of the default sorting algorithm because of stack overflows (e.g. you're sorting large sets of data).
Ad infinitum.
A few months ago the Coding Horror blog reported on some platform with an atrociously bad sorting algorithm. If you have to use that platform then you sure do want to implement your own instead.
The problem of general purpose sorting has been researched to hell and back, so worrying about that outside of academic interest is pointless. However, most sorting isn't done on generalized input, and often you can use properties of the data to increase the speed of your sorting.
A common example is the counting sort. It is proven that for general purpose comparison sorting, O(n lg n) is the best that we can ever hope to do.
However, suppose we know that the values to be sorted lie in a fixed range, say [a, b]. If we create an array of size b - a + 1 (defaulting everything to zero), we can linearly scan the input, using this array to store the count of each element, resulting in a linear-time sort (linear in the input size plus the range of the data), breaking the n lg n bound, but only because we are exploiting a special property of our data.
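A minimal counting sort sketch in Python for integers known to lie in [a, b]:
def counting_sort(values, a, b):
    # One linear pass to count occurrences of each possible value.
    counts = [0] * (b - a + 1)
    for v in values:
        counts[v - a] += 1
    # One linear pass over the range to emit values in order: O(n + (b - a)).
    out = []
    for offset, count in enumerate(counts):
        out.extend([a + offset] * count)
    return out

print(counting_sort([7, 3, 5, 3, 9], a=3, b=9))   # [3, 3, 5, 7, 9]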
So yes, it is useful to write your own sorting algorithms. Pay attention to what you are sorting, and you will sometimes be able to come up with remarkable improvements.
If you have experience at implementing sorting algorithms and understand the way the data characteristics influence their performance, then you would already know the answer to your question. In other words, you would already know things like a QuickSort has pedestrian performance against an almost sorted list. :-) And that if you have your data in certain structures, some sorts of sorting are (almost) free. Etc.
Otherwise, no.

What is a bubble sort good for? [closed]

Do bubble sorts have any real world use? Every time I see one mentioned, it's always either:
A sorting algorithm to learn with.
An example of a sorting algorithm not to use.
Bubble sort is (provably) the fastest sort available under a very specific circumstance. It originally became well known primarily because it was one of the first algorithms (of any kind) that was rigorously analyzed, and the proof was found that it was optimal under its limited circumstance.
Consider a file stored on a tape drive, and so little random access memory (or such large keys) that you can only load two records into memory at any given time. Rewinding the tape is slow enough that doing random access within the file is generally impractical -- if possible, you want to process records sequentially, no more than two at a time.
Back when tape drives were common, and machines with only a few thousand (words|bytes) of RAM (of whatever sort) were common, that was sufficiently realistic to be worth studying. That circumstance is now rare, so studying bubble sort makes little sense at all -- but even worse, the circumstance when it's optimal isn't taught anyway, so even when/if the right situation arose, almost nobody would realize it.
As far as being the fastest on an extremely small and/or nearly sorted set of data, while that can cover up the weakness of bubble sort (to at least some degree), an insertion sort will essentially always be better for either/both of those.
It depends on the way your data is distributed - if you can make some assumptions.
One of the best links I've found to understand when to use a bubble sort - or some other sort, is this - an animated view on sorting algorithms:
http://www.sorting-algorithms.com/
It doesn't get used much in the real world. It's a good learning tool because it's easy to understand and fast to implement. It has bad (O(n^2)) worst case and average performance. It has good best case performance when you know the data is almost sorted, but there are plenty of other algorithms that have this property, with better worst and average case performance.
I came across a great use for it in an optimisation anecdote recently. A program needed a set of sprites sorted in depth order each frame. The sprites' order wouldn't change much between frames, so as an optimisation they were bubble sorted with a single pass each frame. This was done in both directions (top to bottom and bottom to top). So the sprites were always almost sorted with a very efficient O(N) algorithm.
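That trick amounts to one bidirectional (cocktail-shaker) bubble pass per frame; a rough Python sketch of the idea (the sprite structure and depth function are invented for illustration):
def one_cocktail_pass(sprites, depth):
    # One forward and one backward bubble pass; enough to restore order when
    # each item has drifted by at most one position since the last frame.
    n = len(sprites)
    for i in range(n - 1):                               # top to bottom
        if depth(sprites[i]) > depth(sprites[i + 1]):
            sprites[i], sprites[i + 1] = sprites[i + 1], sprites[i]
    for i in range(n - 1, 0, -1):                        # bottom to top
        if depth(sprites[i - 1]) > depth(sprites[i]):
            sprites[i - 1], sprites[i] = sprites[i], sprites[i - 1]

frame = [{"z": 1}, {"z": 3}, {"z": 2}, {"z": 4}]
one_cocktail_pass(frame, depth=lambda s: s["z"])
print(frame)   # [{'z': 1}, {'z': 2}, {'z': 3}, {'z': 4}]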
It's probably the fastest for tiny sets.
Speaking of education. A link to the last scene of sorting out sorting, it's amazing. A must-see.
It's good for small data sets - which is why some qsort implementations switch to it when the partition size gets small. But insertion sort is still faster, so there's no good reason to use it except as a teaching aid.
We recently used bubble sort in an optimality proof for an algorithm. We had to transform an arbitrary optimal solution represented by a sequence of objects into a solution that was found by our algorithm. Because our algorithm was just "sort by this criterion", we had to prove that we can sort an optimal solution without making it worse. In this case, bubble sort was a very good algorithm to use, because it has the nice invariant of just swapping two elements that are next to each other and are in the wrong order. Using more complicated algorithms there would have melted brains, I think.
Greetings.
I think it's a good "teaching" algorithm because it's very easy to understand and implement. It may also be useful for small data sets for the same reason (although some of the O(n lg n) algorithms are pretty easy to implement too).
I can't resist responding to any remarks on bubble sort by mentioning the faster Comb Sort (it appears to be O(n log n), though this has not really been proven). Note that comb sort is a bit faster if you use a precomputed table of gap sizes. Comb sort is exactly the same as bubble sort except that it doesn't start out swapping only adjacent elements; it begins with a larger gap between compared elements and shrinks it on each pass. It's almost as easy to implement/understand as bubble sort.
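For reference, a minimal comb sort sketch in Python (a generic textbook version, not tied to any particular answer here): it is the bubble sort swap loop, but run with a gap that shrinks by a factor of about 1.3 each pass until it reaches 1.
def comb_sort(a):
    gap = len(a)
    swapped = True
    while gap > 1 or swapped:
        gap = max(1, int(gap / 1.3))      # shrink the gap; 1.3 is the usual factor
        swapped = False
        for i in range(len(a) - gap):
            if a[i] > a[i + gap]:         # same compare-and-swap as bubble sort
                a[i], a[i + gap] = a[i + gap], a[i]
                swapped = True
    return a

print(comb_sort([8, 4, 1, 56, 3, -44, 23, -6, 28, 0]))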
Bubble sort is easy to implement and it is fast enough when you have small data sets.
Bubble sort is fast enough when your set is almost sorted (e.g. one or several elements are not in the correct positions); in that case it is better to interleave traversals from index 0 up to n and from n back down to 0.
Using C++ it can be implemented in the following way:
#include <vector>
#include <utility>   // std::swap
using std::vector;
using std::swap;

void bubbleSort(vector<int>& v) { // sort in ascending order
    bool go = true;
    while (go) {
        go = false;
        // forward pass: bubble large values toward the end
        for (int i = 0; i + 1 < (int)v.size(); ++i)
            if (v[i] > v[i + 1]) {
                swap(v[i], v[i + 1]);
                go = true;
            }
        // backward pass: bubble small values toward the front
        for (int i = (int)v.size() - 1; i > 0; --i)
            if (v[i - 1] > v[i]) {
                swap(v[i - 1], v[i]);
                go = true;
            }
    }
}
It can be good if swapping two adjacent items is cheap and swapping arbitrary items is expensive.
Donald Knuth, in his famous "The Art of Computer Programming", concluded that "the bubble sort seems to have nothing to recommend it, except a catchy name and the fact that it leads to some interesting theoretical problems".
Since this algorithm is easy to implement, it is easy to support, and reducing support effort matters over a real application's life cycle.
I used to use it in some cases for small N on the TRS-80 Model 1.
Using a for loop, one could implement the complete sort on one program line.
Other than that, it is good for teaching, and sometimes for lists that are nearly in sorted order.
I once used it for a case where the vast majority of the time it would be sorting two items.
The next time I saw that code, someone had replaced it with the library sort. I hope they benchmarked it first!
It's quick and easy to code and nearly impossible to get wrong. It has its place if you're not doing heavy lifting and there's no library sorting support.
It is the sort I use most often actually. (In our project, we cannot use any external libraries.)
It is useful when I know for sure that the data set is really small, so I do not care one bit about speed and want the shortest and simplest code.
Bubble is not the lowest you can go. Recently, I was in a situation when I needed to sort exactly three elements. I wrote something like this:
// Use sort of stooge to sort the three elements by cpFirst
SwapElementsIfNeeded(&elementTop, &elementBottom);
SwapElementsIfNeeded(&elementTop, &elementMiddle);
SwapElementsIfNeeded(&elementMiddle, &elementBottom);
*pelement1 = elementTop;
*pelement2 = elementMiddle;
*pelement3 = elementBottom;
Oh yes, it is a good selection mechanism. If you find it in code written by someone, you don't hire him.
Mostly nothing. Use QuickSort or SelectionSort instead...!

Resources