Why do we need so many sorting techniques? - algorithm

There is a plethora of sorting techniques in data structures, for example:
Selection Sort
Bubble Sort
Recursive Bubble Sort
Insertion Sort
Recursive Insertion Sort
Merge Sort
Iterative Merge Sort
Quick Sort
Iterative Quick Sort
Heap Sort
Counting Sort
Radix Sort
Bucket Sort
Shell Sort
Tim Sort
Comb Sort
Pigeonhole Sort
Cycle Sort
Cocktail Sort
Strand Sort
and many more.
Do we need all of them?

There’s no single reason why so many different sorting algorithms exist. Here’s a sampler of sorting algorithms and where they came from, to give a better sense of their origins:
Radix sort was invented in the late 1800s for physically sorting punched cards for the US census. It’s still used today in software because it’s very fast on numeric and string data.
Merge sort appears to have been invented by John von Neumann to validate his stored-program computer model (the von Neumann architecture). It works well as a sorting algorithm for low-memory computers processing data that’s streamed through the machine, hence its popularity in the 1960s and 1970s. And it’s a great testbed for divide-and-conquer techniques, making it popular in algorithms classes.
Insertion sort seems to have been around forever. Even though it’s slow in the worst case, it’s fantastic on small inputs and mostly-sorted data and is used as a building block in other fast sorting algorithms.
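A minimal sketch (in Python) of why insertion sort shines on mostly-sorted data: the inner loop does almost no work when each element is already near its final position.

```python
def insertion_sort(a):
    """Sort a list in place. On mostly-sorted input the while loop
    exits almost immediately, giving near-linear behavior."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot right until key's position is found.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```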
Quicksort was invented in 1961. It plays excellently with processor caches, hence its continued popularity.
Sorting networks were studied extensively many years back. They’re still useful as building blocks in theoretical proof-of-concept algorithms like signature sort.
Timsort was invented for Python and was designed to sort practical, real-world sequences faster than other sorts by taking advantage of common distributions and patterns.
Introsort was invented as a practical way to harness the speed of quicksort without its worst-case behavior.
Shellsort was invented over fifty years ago and was practical on the computers of its age. Probing its theoretical limits was a difficult mathematical problem for folks who studied it back then.
Han and Thorup’s O(n sqrt(log log n))-time integer sorting algorithm was designed to probe the theoretical limits of efficient algorithms using word-level parallelism.
Cycle sort derives from the study of permutations in group theory and is designed to minimize the number of memory writes made when sorting the list.
Heapsort is noteworthy for being in-place and yet fast in practice. It’s based on the idea of implicitly representing a nontrivial data structure.
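A sketch of that implicit-representation idea: the heap is never built as an explicit tree; index arithmetic on the array itself stands in for child pointers.

```python
def heapsort(a):
    """In-place heapsort: the max-heap lives implicitly in the array.
    Node i's children sit at indices 2*i + 1 and 2*i + 2, so no extra
    data structure is allocated."""
    def sift_down(root, end):
        # Restore the heap property for the subtree rooted at `root`,
        # considering only indices up to `end` (inclusive).
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1  # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    n = len(a)
    # Build the heap bottom-up, then repeatedly move the max to the end.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a
```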
This isn’t even close to an exhaustive list of sorting algorithms, but hopefully gives you a sense of what’s out there and why. :-)

The main reason sorting algorithms are discussed and studied in early computer science classes is that they make excellent study material. The problem of sorting is simple to state, and it provides a good excuse to present several algorithmic strategies and data structures, show how to implement them, discuss time and space complexity, and explore the different properties algorithms can have even when they apparently solve the same problem.
In practice, standard libraries for programming languages usually include a default sort function, such as std::sort in C++ or list.sort in Python; in almost every situation, you should trust that function and the algorithm it uses.
But everything you've learned about sorting algorithms is valuable and can be applied to other problems. Here is a non-exhaustive list of things that can be learned by studying sorting algorithms:
divide and conquer;
heaps;
binary search trees, including different types of self-balancing binary search trees;
the importance of choosing an appropriate data-structure;
difference between in-place and out-of-place sorting;
difference between stable and unstable sorting;
recursive versus iterative approaches;
how to calculate the time complexity, and how to compare the efficiency of two algorithms;
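On the stability point above: Python's built-in sorted (and list.sort) is Timsort, which is stable, so a two-line check makes the stable-vs-unstable distinction concrete.

```python
# Records: (name, grade). Sort by grade only and see whether
# equal-grade records keep their original relative order.
records = [("alice", 2), ("bob", 1), ("carol", 2), ("dave", 1)]

# sorted() is stable: "bob" stays before "dave", "alice" before "carol".
stable = sorted(records, key=lambda r: r[1])
print(stable)  # [('bob', 1), ('dave', 1), ('alice', 2), ('carol', 2)]
```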

Besides educational reasons, we need multiple sorting algorithms because each works best in particular situations, and no single algorithm dominates all the others.
For example, although the average time complexity of quicksort is impressive, a naive quicksort's performance on nearly sorted arrays is horrible.
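A small experiment makes the contrast visible by counting comparisons. This uses a naive quicksort with a first-element pivot (an assumption for illustration; library quicksorts choose pivots far more carefully), so already-sorted input is its worst case.

```python
import random

def quicksort(a):
    """Naive quicksort (first-element pivot, not in place).
    Returns (sorted_list, number_of_comparisons)."""
    comparisons = 0
    def qs(lst):
        nonlocal comparisons
        if len(lst) <= 1:
            return lst
        pivot, rest = lst[0], lst[1:]
        comparisons += len(rest)  # every element is compared to the pivot
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return qs(left) + [pivot] + qs(right)
    return qs(a), comparisons

data = list(range(500))          # already sorted: the worst case here
shuffled = data[:]
random.shuffle(shuffled)

_, worst = quicksort(data)       # quadratic: n*(n-1)/2 comparisons
_, avg = quicksort(shuffled)     # roughly n log n comparisons
print(worst, avg)
```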

Related

what is the best sorting algorithm in terms of speed?

There are bubble, insertion, selection, and quick sorting algorithms.
Which one is the 'fastest'?
Code size is not important.
Bubble sort
insertion sort
quick sort
I tried to check their speed. When the data is already sorted, bubble and insertion sort run in O(n), but those algorithms are too slow on large lists.
Is it good to use only one algorithm, or is it faster to use a mix of different ones?
Quicksort is generally very good, only really falling down when the data is close to being ordered already, or when it has a lot of similarity (many repeated keys), in which case it is slower.
If you don't know anything about your data, and you don't mind risking quicksort's slow case (thinking it through, you can usually determine whether already-ordered data is likely in your situation), then quicksort is never going to be a BAD choice.
If you decide your data is, or will sometimes (or often enough to be a problem) be, already sorted or significantly partially sorted, or you otherwise decide you can't risk quicksort's worst case, then consider timsort.
As noted by the comments on your question though, if it's really important to have the ultimate performance, you should consider implementing several algorithms and trying them on good representative sample data.
HP / Microsoft std::sort is introsort (quick sort switching to heap sort if nesting reaches some limit), and std::stable_sort is a variation of bottom up mergesort.
For sorting an array or vector of mostly random integers, counting / radix sort would normally be fastest.
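As a sketch of the radix idea for non-negative integers (a simple LSD, byte-at-a-time version for illustration, not a tuned implementation):

```python
def radix_sort(a):
    """LSD radix sort for non-negative integers: bucket by one byte at a
    time, least significant first. Each pass is stable, so the ordering
    from earlier passes survives as a tie-breaker."""
    if not a:
        return a
    max_val, shift = max(a), 0
    while (max_val >> shift) > 0:
        buckets = [[] for _ in range(256)]
        for x in a:
            buckets[(x >> shift) & 0xFF].append(x)
        # Concatenate buckets in order; the list is now sorted on the
        # low `shift + 8` bits.
        a = [x for bucket in buckets for x in bucket]
        shift += 8
    return a
```

Note the running time is O(n) per pass with a fixed number of passes for fixed-width integers, which is why it can beat comparison sorts on mostly random numeric data.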
Most external sorts are some variation of a k-way bottom up merge sort (the initial internal sort phase could use any of the algorithms mentioned above).
For sorting a small (16 or fewer) fixed number of elements, a sorting network could be used. This seems to be one of the lesser-known approaches. It would mostly be useful when repeatedly sorting small sets of elements, perhaps implemented in hardware.
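A minimal sketch of a 4-input sorting network (the standard 5-comparator network): the compare-exchange sequence is fixed and data-independent, which is what makes networks attractive for hardware or branch-free code.

```python
def sort4(a, b, c, d):
    """Sort exactly four values with a fixed sequence of five
    compare-exchange operations; the sequence never depends on the data."""
    def cmp_swap(x, y):
        # Compare-exchange: output the pair in ascending order.
        return (x, y) if x <= y else (y, x)

    a, b = cmp_swap(a, b)
    c, d = cmp_swap(c, d)
    a, c = cmp_swap(a, c)
    b, d = cmp_swap(b, d)
    b, c = cmp_swap(b, c)
    return a, b, c, d
```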

which sorting algorithm to use where? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
There is a variety of sorting algorithms available. A sorting algorithm with time complexity O(n^2) may be preferable to an O(n log n) one because it is in-place or stable. For example:
For somewhat sorted things insertion sort is good.
Applying quick sort on nearly sorted array is foolishness.
Heap sort is good with O(nlogn) but not stable.
Merge sort can't be used in embedded systems as in worst case it requires O(n) of space complexity.
I want to know which sorting algorithm is suitable in what conditions.
Which sorting algo is best for sorting names in alphabetical order?
Which sorting algo is best for sorting less integers?
Which sorting algo is best for sorting less integers but may be large in range (98767 – 6734784)?
Which sorting algo is best for sorting billions of integers?
Which sorting algo is best for sorting in embedded systems or real time systems where space and time both are constraints?
Please suggest these/other situations, books or website for these type of comparisons.
Well, there is no silver bullet - but here are some rules of thumb:
Radix sort / counting sort is usually good when the range of the elements (call it U) is relatively small compared to the number of elements (U << n). (Might fit your cases 2 and 4.)
Insertion sort is good for small lists (say n < 30), where it is empirically even faster than O(n log n) algorithms. In fact, you can optimize an O(n log n) top-down algorithm by switching to insertion sort when n < 30.
A variation of radix sort can also be a good choice for sorting strings alphabetically, since it is O(|S|*n), while a normal comparison-based algorithm is O(|S|*n log n) [where |S| is the length of your strings]. (Fits your case 1.)
When the input is very large, way too large to fit in memory, the way to do it is with an external sort, a variation of merge sort that minimizes the number of disk reads/writes and makes sure they are done sequentially, which improves performance drastically. (Might fit case 4.)
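The insertion-sort cutoff mentioned above can be sketched as a hybrid merge sort. The cutoff of 30 is the rule of thumb from this answer, not a tuned constant.

```python
CUTOFF = 30  # below this size, insertion sort wins in practice

def hybrid_merge_sort(a):
    """Top-down merge sort that hands small subarrays to insertion sort."""
    if len(a) <= CUTOFF:
        # Insertion sort the small list directly.
        for i in range(1, len(a)):
            key, j = a[i], i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a
    mid = len(a) // 2
    left = hybrid_merge_sort(a[:mid])
    right = hybrid_merge_sort(a[mid:])
    # Merge the two sorted halves.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```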
For general-case sorting, quicksort and timsort (used in Java) give good performance.
Merge sort can't be used in embedded systems as in worst case it requires O(n) of space complexity.
You may be interested in the stable_sort function from C++. It tries to allocate the extra space for a regular merge sort, but if that fails it does an in-place stable merge sort with inferior time complexity (n * ((log n)^2) instead of n * (log n)). If you can read C++ you can look at the implementation in your favourite standard library, otherwise I expect you can find the details explained somewhere in language-agnostic terms.
There's a body of academic literature about in-place stable sorting (and in particular in-place merging).
So in C++ the rule of thumb is easy, "use std::stable_sort if you need a stable sort, otherwise use std::sort". Python makes it even easier again, the rule of thumb is "use sorted".
In general, you will find that a lot of languages have fairly clever built-in sort algorithms, and you can use them most of the time. It's rare that you'll need to implement your own to beat the standard library. If you do need to implement your own, there isn't really any substitute for pulling out the textbooks, implementing a few algorithms with as many tricks as you can find, and testing them against each other for the specific case you're worried about for which you need to beat the library function.
Most of the "obvious" advice that you might be hoping for in response to this question is already incorporated into the built-in sort functions of one or more common programming languages. But to answer your specific questions:
Which sorting algo is best for sorting names in alphabetical order?
A radix sort might edge out standard comparison sorts like C++ sort, but that might not be possible if you're using "proper" collation rules for names. For example, "McAlister" used to be alphabetized the same as "MacAlister", and "St. John" as "Saint John". But then programmers came along and wanted to just sort by ASCII value rather than code a lot of special rules, so most computer systems don't use those rules any more. I find Friday afternoon is a good time for this kind of feature ;-) You can still use a radix sort if you do it on the letters of the "canonicalized" name rather than the actual name.
"Proper" collation rules in languages other than English are also entertaining. For example in German "Grüber" sorts like "Grueber", and therefore comes after "Gruber" but before "Gruhn". In English the name "Llewellyn" comes after "Lewis", but I believe in Welsh (using the exact same alphabet but different traditional collation rules) it comes before.
For that reason, it's easier to talk about optimizing string sorts than it is to actually do it. Sorting strings "properly" requires being able to plug in locale-specific collation rules, and if you move away from a comparison sort then you might have to re-write all your collation code.
Which sorting algo is best for sorting less integers?
For a small number of small values maybe a counting sort, but Introsort with a switch to insertion sort when the data gets small enough (20-30 elements) is pretty good. Timsort is especially good when the data isn't random.
Which sorting algo is best for sorting less integers but may be large in range (98767 – 6734784)?
The large range rules out counting sort, so for a small number of widely-ranged integers, Introsort/Timsort.
Which sorting algo is best for sorting billions of integers?
If by "billions" you mean "too many to fit in memory" then that changes the game a bit. Probably you want to divide the data into chunks that do fit in memory, Intro/Tim sort each one, then do an external merge. Or if you're on a 64-bit machine sorting 32-bit integers, you could consider counting sort.
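The chunk-then-merge structure described here can be sketched in a few lines. For illustration the sorted runs are kept in memory; a real external sort would write each run to disk and stream it back during the merge.

```python
import heapq
import itertools

def external_sort(stream, chunk_size):
    """Sketch of an external sort: sort fixed-size chunks ("runs") in
    memory, then k-way merge the sorted runs with a heap."""
    runs = []
    it = iter(stream)
    while True:
        chunk = list(itertools.islice(it, chunk_size))
        if not chunk:
            break
        chunk.sort()          # in-memory sort of one run
        runs.append(chunk)    # a real implementation spills this to disk
    # heapq.merge does the k-way merge, reading each run sequentially.
    return list(heapq.merge(*runs))
```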
Which sorting algo is best for sorting in embedded systems or real time systems where space and time both are constraints?
Probably Introsort.
For somewhat sorted things insertion sort is good.
True, and Timsort takes advantage of the same situation.
Applying quick sort on nearly sorted array is foolishness.
False. Nobody uses the plain QuickSort originally published by Hoare, you can make better choices of pivot that make the killer cases much less obvious than "sorted data". To deal with the bad cases thoroughly there is Introsort.
Heap sort is good with O(nlogn) but not stable.
True, but Introsort is better (and also not stable).
Merge sort can't be used in embedded systems as in worst case it requires O(n) of space complexity.
Handle this by allowing for somewhat slower in-place merging like std::stable_sort does.

Real world examples to decide which sorting algorithm works best

I am risking this question being closed before I get an answer, but I really do want to know the answer. So here goes.
I am currently trying to learn algorithms, and I am beginning to understand them as such, but cannot relate them to anything real.
I understand time complexity and space complexity. I also understand some sorting algorithms based on their pseudocode.
Sorting algorithms like:
Bubble Sort
Insertion Sort
Selection Sort
Quicksort
Mergesort
Heapsort (Some what)
I am also aware of best-case and worst-case scenarios (average case not so much).
Some online relevant references
Nice place which shows all the above graphically.
This gave me a good understanding as well.
BUT my question is - can someone give me REAL-WORLD EXAMPLES where these sorting algorithms are implemented?
As the number of elements increases, you will use more sophisticated sorting algorithms. The later sorting techniques have a higher initial overhead, so you need a lot of elements to sort to justify that cost. If you only have 10 elements, a bubble or insertion sort will be much faster than a merge sort or heapsort.
Space complexity is important to consider for smaller embedded devices like a TV remote or a cell phone. You don't have enough space for the auxiliary arrays of something like a merge sort on those devices.
Databases use an external merge sort to sort sets of data that are too large to be loaded entirely into memory. The driving factor in this sort is the reduction in the number of disk I/Os.
Sorting-Algorithms.com has a good bubble sort discussion; there are many other factors to consider that contribute to time and space complexity.
One example is C++ STL sort
As the Wikipedia page says:
The GNU Standard C++ library, for example, uses a hybrid sorting algorithm: introsort is performed first, to a maximum depth given by 2×log2 n, where n is the number of elements, followed by an insertion sort on the result. Whatever the implementation, the complexity should be O(n log n) comparisons on the average.

Efficiency of Sort Algorithms

I am studying up for a pretty important interview tomorrow and there is one thing that I have a great deal of trouble with: Sorting algorithms and BigO efficiencies.
What number is important to know? The best, worst, or average efficiency?
Worst, followed by average. Be aware of the real-world impact of the so-called "hidden constants" too - for instance, the classic quicksort algorithm is O(n^2) in the worst case and O(n log n) on average, whereas mergesort is O(n log n) in the worst case, yet quicksort will outperform mergesort in practice.
All of them are important to know, of course. You have to understand that an algorithm's benefits in the average case can become a terrible deficit in the worst case, or that its worst case isn't that bad but its best case isn't that good and it only works well on unsorted data, etc.
In short:
Sorting algorithm efficiency varies with the input data and the task.
The maximum speed achievable by a general comparison sort is n*log(n).
If the data contains sorted sub-sequences, the maximum speed can be better than n*log(n).
If the data consists largely of duplicates, sorting can be done in near-linear time.
Most sorting algorithms have their uses.
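The duplicates point can be illustrated with three-way ("Dutch national flag") partitioning, which groups all pivot-equal elements in a single pass so they are never touched again; with few distinct keys the running time approaches linear.

```python
def quicksort_3way(a, lo=0, hi=None):
    """Quicksort with three-way partitioning: elements equal to the
    pivot end up in a middle block that is excluded from recursion,
    so inputs with many duplicates sort in close to linear time."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[lo]
    lt, i, gt = lo, lo, hi
    while i <= gt:
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1; i += 1
        elif a[i] > pivot:
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
        else:
            i += 1          # equal to pivot: leave in the middle block
    quicksort_3way(a, lo, lt - 1)   # recurse only on strictly-less block
    quicksort_3way(a, gt + 1, hi)   # and strictly-greater block
    return a
```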
Most quicksort variants have an average case of n*log(n), and they are usually faster than other not heavily optimized algorithms. Quicksort is faster when it is not stable, but stable variants are only a fraction slower. The main problem is the worst case; the usual fix is introsort.
Most merge sort variants have best, average, and worst cases all fixed at n*log(n). Merge sort is stable and relatively easy to scale up, BUT it needs a binary tree (or an emulation of one) proportional in size to the total number of items. The main problem is memory; the usual fix is timsort.
Sorting algorithms also vary with the size of the input. I can make a newbie claim that for data inputs over 10T in size, there is no match for merge sort variants.
I recommend that you don't merely memorize these factoids. Learn why they are what they are. If I were interviewing you, I would make sure to ask questions that show me you understand how to analyze an algorithm, not just that you can spit back something you saw on a webpage or in a book. Additionally, the day before an interview is not the time to be doing this studying.
I wish you the best of luck!! Please report back in a comment how it went!
I just finished one set of interviews at my college...
Every algorithm has its benefits, otherwise it wouldn't exist.
So it's better to understand what is so good about the algorithm you are studying. Where does it do well? How can it be improved?
I guess you'll automatically need to read about the various efficiency notations when you do this. Mind the worst case, and pay attention to the average case; best cases are rare.
All the best for your interview.
You may also want to look into other types of sorting that can be used when certain conditions exist. For example, consider Radix sort. http://en.wikipedia.org/wiki/Radix_sort

What should students be taught first when first learning sorting algorithms? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
If you were a programming teacher and you had to choose one sorting algorithm to teach your students which one would it be? I am asking for only one because I just want to introduce the concept of sorting. Should it be the bubble sort or the selection sort? I have noticed that these two are taught most often. Is there another type of sort that will explain sorting in an easier to understand way?
I'm not sure I could be a computer science teacher and only teach one sorting algorithm.
At a bare minimum the students should be taught at least one of each of the major sorting types, namely an exchanging sort, a selection sort, an insertion sort, and a merge sort. In addition to one of each of these types, I would also cover Quicksort which falls under the partitioning sort heading.
As for the specific sorts from each of the types that I would cover:
Bubble Sort as an example of the exchanging sort category as it is one of the simplest sort types and one that they likely have already seen or stumbled across on their own. This is also the sort they will most likely use the most often if they have to roll their own for something simple so it makes sense for them to understand it.
Heapsort from the selection sort category. This would likely be the most confusing sort next to Quicksort, but once they understand how it works, it sets them up for understanding the Quicksort algorithm.
Insertion Sort as an example of the insertion sort category would be covered, as it is also a simple sort that they can get a lot of use out of.
Merge Sort is somewhat of a specialized sort, so you don't see it too often, but it is very good at what it does and is still useful in situations where you need to sort data that arrives in a piecemeal fashion. Since they might encounter scenarios where they need to merge two sorted lists together, knowing this one makes sense.
If I had to narrow things down to just one sort that I could teach, but I had the time to make sure that the student understood exactly what was going on then I would teach Quicksort. While it is not easy to grasp, the vast majority of frameworks out there use it for their sorting algorithm so an understanding of how it works is useful in your development with that framework. Also, it is quite likely that if someone is able to understand Quicksort then they should be able to learn the Bubble Sort and Insertion Sort on their own.
Let your students decide.
Before introducing any sorting algorithms, give each student some playing cards, maybe 10 or so. Then ask them to sort the cards. Have them write down the steps they take, which will essentially be their algorithm. Likely they'll discover insertion or selection sort. You can also ask them to estimate how many steps their sort would take if they had 100, or 1000 cards, which is a good lead into big O notation.
PS - does anybody think they'd discover bubble sort?
No matter what sorting algorithm is taught, if the students don't also learn about bogosort, they're missing out, and the teacher is losing an obvious way of engaging his audience :)
If you want to teach sorting, then a bubble sort is probably the easiest to comprehend. If you want to teach sorting algorithms, then you should really be teaching quicksort, mergesort, insertsort and possibly even heapsort so that students can get a feel for the tradeoffs between various sorting methods.
I'd start by showing insertion sort. Everyone who's sorted a hand of cards (which is basically everybody) knows this sort. Plus it doesn't have the abysmal performance of bubble sort.
I learned Bubble Sort first -- I think if you do only one, you probably need to do one of the O(n^2) algorithms because they are easier to understand.
There are a lot of sort visualizers out there to help quickly show a comparison:
http://www.cs.ubc.ca/~harrison/Java/sorting-demo.html
http://math.hws.edu/TMCM/java/xSortLab/
http://vision.bc.edu/~dmartin/teaching/sorting/anim-html/all.html
http://maven.smith.edu/~thiebaut/java/sort/
Selection sort is probably the most straightforward sorting algorithm to teach and simplest to grasp. It's also a great example of why simple solutions are not always the best, which could lead into a discussion of more interesting and faster sorts like mergesort and quicksort.
I think sorting methods are a very good example of what an algorithm is. But it only becomes a good example when you compare various methods. If you can only teach one, I'm not sure it's worthwhile.
Realistically, we call the sort method in some framework. So if you can't effectively teach about the sorting algorithms, it might not be worth the time.
Nobody has to write a sort method anymore, but it's still a very good example of the impact of algorithms.
I would advise both Selection and Merge Sort on the general sorting algorithms. Both are relatively straight forward compared to their friends. But if you can only teach one and the people can handle it, I would go with Merge Sort. Because merge sort is efficient, stable, and teaches several important concepts (merging, divide and conquer).
Selection Sort:
Find the minimum/maximum value and put it in place, then do it again on the rest of the list... this is very intuitive. For me, selection is much more intuitive than even bubble sort. You could probably teach this in very little time, and there will be a feeling of accomplishment for people implementing it. Although bubble and insertion sort can quit early and will not waste time on a sorted list, selection is probably the easiest to do; insertion is probably the hardest to implement.
MergeSort:
This will teach how to merge (which is important because there are a ton of speedups you can get by merging two sorted lists together) as well as how to divide the problem into smaller subproblems (important for dealing with hierarchical structures, and also used in quicksort). QuickSort, though faster, is much harder to understand: you need two scanning variables, a pivot item, etc. Merge is more straightforward, and merging will come in handy for things other than sorting (like joining two records together that are sorted on a field).
The basic concept of a merge is intuitive. Take the lesser item from a sorted list and put that in the final list. And dividing the merge sort problem up is also kind of intuitive.
mergesort(first half)
mergesort(second half)
merge (first half, second half)
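The pseudocode above translates almost directly into a runnable sketch:

```python
def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    first = merge_sort(a[:mid])    # mergesort(first half)
    second = merge_sort(a[mid:])   # mergesort(second half)
    return merge(first, second)    # merge(first half, second half)

def merge(left, right):
    """Repeatedly take the lesser front item from the two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]  # append whichever half has leftovers
```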
If you can only teach one sort and have to do it quickly (or the audience isn't really interested at all) I would tend towards Selection because this is harder and selection could be taught quickly. But if the audience is more motivated then I would do Merge because it teaches so many additional fundamental concepts.
Of the more efficient sorts, this one is much easier than quicksort and heapsort. Although quicksort is probably the quickest in practice (as long as you have plenty of RAM), heapsort is probably able to sort the largest lists (if implemented non-recursively).
Bucket Sort:
I would touch on this because it is also intuitive and represents another type of sorting algorithm. In many situations this is appropriate and the O(n) time is very attractive.
Counting Sort:
It has its uses. There are some cases where this is very efficient...It is also relatively easy.
Um... sorting algorithms are really an intuitive, concrete way of teaching some good lessons about big-O notation, optimization, common sense, and computer science in general, and maybe a "bubble sort" + "selection sort" + "merge sort" approach might be useful, for comparison. I think the most intuitive (and efficient in many cases) is selection sort.
I would teach the "call the framework" sort.
It would be efficient to teach and learn this sort. Students would have a high degree of success and low error rate with implementing this sort.
Edit:
There are a lot of criticizing comments on this answer about the quality of my single sort class. These criticisms are applicable to any answer to this question, which is -
"If you were a programming teacher and you had to choose one sorting algorithm to teach your students which one would it be?"
I think that only teaching one sort is crippling. Remember that Everything Is Fast For Small n, so in terms of teaching a sort, it is not a question of performance, but understanding the algorithm.
I think the intro should be the bubble sort to introduce the concept, but at a minimum introduce another sort to get the students thinking about other ways to perform the task. Make sure they understand the tradeoff in performance and code complexity and that there are different tools for different situations.
It would be radix sort. Because it is surprisingly easy and yet non-obvious. Things like that just have to be taught.
Bubble sort is the classic, and it is very easy to grasp why it works.
But unless this is a very introductory course then it should include an overview of other sorting algorithms since this is by far the best way to show the trade offs involved in algorithm design and explain best case vs worst case vs average behaviour (both runtime and memory).
I thought selection sort was the simplest to comprehend, and IMO would be the best to introduce sorting.
I think it would be silly to not teach them at least one O(nlog(n)) sorting algorithm, along with an explanation of big O notation.
Have everyone bring in a deck of cards, and pick out one suit.
Divide into teams of two.
One shuffles the 13 cards, and lays them face down in a row.
The partner gets to point to two of the cards.
The other picks them up, looks at them, and either says "In order" or "Not in order"
The partner then can tell him to swap them, and lay them down again (face down).
The job of the partner is to sort the cards in order (something you will need to define up front).
When the partner thinks they are sorted, he says "Stop".
The other turns the cards face up, and they check.
When this is done, discuss what worked for everyone.
Then talk about BUBBLE SORT vs SELECTION sort
Then talk about QUICK SORT.
Have them try each of them out, with their suit of cards.
If you want to teach the concept of sorting, then I believe you must teach at least two different ways of sorting -- otherwise the students will think that sorting is just about the one way you taught them.
I think the students would get the most value out of learning a classic O(n^2) algorithm (preferably insertion or selection sort, but please not bubble sort, which has no justification for being used in real applications), as well as a divide-and-conquer, O(nlogn) algorithm such as quicksort or merge sort.
If you're worried that these sorts will be too hard to teach your students, have a look at these activities from Computer Science Unplugged, which are designed for elementary school students.
The best lecture I ever attended on algorithms demonstrated all of them as bar graphs drawn via Java applets. You could then see the pattern of the sorting as well as its O(n) characteristics...
Teach bubble sort because it's incredibly simple, and show the rest as animations, maybe?
Bubble sort is the easiest one for beginners to grok, and it also serves as an excellent example of why easy solutions can be expensive. Could be a good way to segue into asymptotic complexity and big-O notation?
I'd teach Bubble sort if I had to pick only one, since it's simpler to grasp (I think). However, the most valuable thing I got out of studying the various sorting algorithms was that there's more than one way to do it (which eventually led me to Perl, but that's another story), each with advantages and drawbacks. So, learning a single sorting algorithm might be a way to miss the critical aspect of the whole thing.
Quicksort is definitely another very simple-to-understand sorting algorithm, and it's a cinch to implement. It has in-place and "out-of-place" versions that are fun to think about. And it's quick.
Quick enough for you, at least, old man.
Er, you should be giving them an O(n^2) and an O(n log n) sort. Bubble and Heap would be my picks.
Regardless of the actual algorithm you'll choose, I advice you to present two algorithms (instead of one) and explain tradeoffs (in memory, execution speed, etc) they make.
Then you should hammer home that there is no such thing as an ideal sorting algorithm that can be blindly applied to any task. There are good candidates, however. At this point, show them quicksort and introspective sort (as homework) and consider your task completed :)
Keeping in mind that your student will probably never have to code a sort themselves, the question really should be decided on "what is the value of learning this algorithm?" instead of "what does this algorithm do?"
In other words, your students should be taught a variety of different algorithms -- whether they are for sorting, searching, compressing etc is largely irrelevant.
IntroSort is a great algorithm to teach as well, once students have learned to understand Quick and Heap sort. IntroSort (short for Introspective sort) starts as a QuickSort and switches to a HeapSort if the recursion depth exceeds a level based on the logarithm of the number of elements being sorted.
Overall, you get the best of the Quick and Heap sort worlds and a worst case running time of O(n log n).
IntroSort on Wikipedia
I would compare and contrast an O(n^2) sort, like selection sort, with an O(n log n) sort like quicksort.
But if they aren't computer science people and don't necessarily intend to be, then just show an easy-to-understand sort like selection or bubble.
I just did that last week with my kid.
I asked him to come up with his own sorting algorithm, and asked how long it would take to sort a deck of cards using his method.
Then I showed him bubble sort (the easiest to explain to a kid: "It's a bubble!") and made him count the steps again. He realized then that it was easier and faster.
Then, if you want, you can teach more advanced ones (quicksort is the most surprising).
Ideally you should teach at least three that are different (not variations of the same one).
If you can ultimately only teach ONE algorithm at all, I think SelectionSort is even easier to teach (and implement, IMHO) than BubbleSort. It might be extremely inefficient, but everyone will get its concept, everyone with a little programming knowledge can read its code, and there is one thing that makes SelectionSort rather unique among sorting algorithms: it needs only n - 1 swaps for a list of n entries :-) Not only does it never need more than n - 1 swaps, it makes exactly n - 1 swaps. The number of swaps performed is 100% deterministic and known before it even starts. I think this is not true for any other in-place sorting algorithm (though people are free to correct me in the comments).
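A sketch that makes the swap count visible. Note the n - 1 figure comes out deterministic only if you count one swap per position, including trivial self-swaps, which is how it is counted here.

```python
def selection_sort(a):
    """Selection sort that counts swaps: for n items it performs
    exactly n - 1 swaps (counting trivial self-swaps, so the total
    is known before the sort even starts)."""
    swaps = 0
    n = len(a)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):
            if a[j] < a[m]:
                m = j           # remember position of the minimum
        a[i], a[m] = a[m], a[i]  # one swap per position, even if m == i
        swaps += 1
    return a, swaps
```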
Otherwise, I vote for BubbleSort and SelectionSort: those are the easiest to understand, and they show how sorting the easy way (these are simple enough that students may actually come up with them without ever having heard of any sorting algorithm) is sometimes not very effective. In contrast, I'd show QuickSort and MergeSort, both very good divide-and-conquer algorithms and both very fast.
Algorithms I'd not use, as their concepts are harder to grasp, are ShellSort, RadixSort, HeapSort, and BucketSort. They are also rather special (you usually only use them when sorting certain kinds of data or when you have certain constraints on the sorting).
You might mention the little sister of ShellSort, InsertionSort (which is basically a shell sort where the step width has finally reached one), but only because many QuickSort implementations use InsertionSort as a fall-back sort once the recursion gets too deep (or the remaining data set to sort recursively gets too small)... but here it starts getting really complicated, and you shouldn't make it too complicated.
They should be first asked to implement their own sorting algorithm - from scratch, without reference to any existing algorithm beforehand. Doing this will teach them about sorting way more than telling them of an algorithm that they wouldn't have any idea how it was derived.
I think the movie Sorting Out Sorting, which was produced 30 years ago, is still a pretty good instructional aid for gaining a better understanding of sort algorithms... here's the 12x speed version.
