I need more sorting algorithms besides these:
Insertion
Selection
Bubble
Shell
Merge
Heap
Quick
Radix
Can anyone help?
Quick sort is one of the best sorting algorithms: QuickSort is a Divide and Conquer algorithm. It picks an element as a pivot, moves elements smaller than the pivot to its left, and moves elements larger than the pivot to its right.
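A minimal QuickSort sketch in Python, for illustration (the pivot here is just the last element; real implementations usually choose it more carefully, and sort in place rather than building new lists):

```python
def quicksort(arr):
    # Base case: arrays of length 0 or 1 are already sorted.
    if len(arr) <= 1:
        return arr
    pivot = arr[-1]  # simple pivot choice: the last element
    smaller = [x for x in arr[:-1] if x <= pivot]
    larger = [x for x in arr[:-1] if x > pivot]
    # Recursively sort both partitions, then combine them around the pivot.
    return quicksort(smaller) + [pivot] + quicksort(larger)
```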
Like QuickSort, Merge Sort is a Divide and Conquer algorithm. The array is recursively divided into two halves until the size becomes 1, and the halves are then merged back together in sorted order.
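A minimal Merge Sort sketch in Python, following that description:

```python
def merge_sort(arr):
    # Base case: size 1 (or 0) is already sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    # Merge two sorted lists into one sorted list.
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result
```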
Bubble Sort is the simplest sorting algorithm. It is rarely used in the real world, since it is not very efficient. On each pass, the largest remaining element "bubbles" to the end of the array.
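A short Bubble Sort sketch in Python:

```python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # After each pass, the largest remaining element has
        # "bubbled" to position n - 1 - i.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr
```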
The selection sort algorithm sorts an array by repeatedly finding the minimum (or maximum) element. It maintains two subarrays: one holds the selected elements, which are already sorted, and the other holds the remaining items, which are not yet sorted.
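A Selection Sort sketch in Python showing the two-subarray idea:

```python
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # arr[:i] is the sorted subarray; arr[i:] is the unsorted one.
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Move the minimum of the unsorted part to the end of the sorted part.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
```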
Look at these links:
http://www.sorting-algorithms.com/
https://www.cs.usfca.edu/~galles/visualization/ComparisonSort.html
In the classical merge sort algorithm, one typically divides the input array until it consists of single-element subarrays, before merging the elements back together. But it's well known that you can modify this merge sort algorithm by splitting the array until you have, say, k subarrays, each of size n/k (where n is the original length of the array). You can then use insertion sort to sort each of those k subarrays and merge them using the merge subroutine.
Intuitively, I think that this should be better than just merge sort in some cases because insertion sort is fast on small arrays. But I want to figure out precisely when this hybrid algorithm is better than the regular merge sort algorithm. I don't think it would be better for small k because as k approaches 1, we'd just be using the insertion sort algorithm. I think there is some optimal ratio n/k, but I am not so sure how to find it.
Any help is appreciated.
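For what it's worth, here is a minimal Python sketch of that hybrid; the cutoff (the subarray size at which the recursion switches to insertion sort) is a tunable parameter. In the standard textbook analysis, insertion-sorting n/m sublists of length m costs O(nm) and merging them costs O(n log(n/m)), so the hybrid keeps merge sort's O(n log n) as long as m = O(log n); in practice the best cutoff is a small constant found empirically.

```python
def insertion_sort(arr, lo, hi):
    # Sort arr[lo..hi] in place by insertion sort.
    for i in range(lo + 1, hi + 1):
        key = arr[i]
        j = i - 1
        while j >= lo and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key

def hybrid_merge_sort(arr, lo=0, hi=None, cutoff=16):
    # cutoff is the subarray size below which we switch to
    # insertion sort; tune it empirically for your data.
    if hi is None:
        hi = len(arr) - 1
    if hi - lo + 1 <= cutoff:
        insertion_sort(arr, lo, hi)
        return
    mid = (lo + hi) // 2
    hybrid_merge_sort(arr, lo, mid, cutoff)
    hybrid_merge_sort(arr, mid + 1, hi, cutoff)
    # Merge the two sorted halves arr[lo..mid] and arr[mid+1..hi].
    merged = []
    i, j = lo, mid + 1
    while i <= mid and j <= hi:
        if arr[i] <= arr[j]:
            merged.append(arr[i])
            i += 1
        else:
            merged.append(arr[j])
            j += 1
    merged.extend(arr[i:mid + 1])
    merged.extend(arr[j:hi + 1])
    arr[lo:hi + 1] = merged
```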
I'm having trouble with this sorting question.
Describe an algorithm that sorts an array under these conditions:
1. In the sorted array there are 3 possible differences (k1, k2, k3, all natural numbers) between adjacent elements.
2. In the sorted array there are 3 possible differences (k1, k2 = 2k1, k3 = 3k1, all rational numbers) between adjacent elements.
I was able to find the differences in both cases in linear time, O(n), but I'm stuck at O(n log n) for the sorting part.
I'm trying to get O(n) time, or maybe O(n log log n), perhaps by treating k1, k2, k3 as really small numbers and using counting sort.
Thanks.
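For reference, a minimal counting sort sketch in Python; it assumes non-negative integer keys with a small maximum value, in the spirit of the "really small numbers" idea above, and runs in O(n + max value):

```python
def counting_sort(arr):
    # Assumes non-negative integers with a small maximum value.
    if not arr:
        return arr
    counts = [0] * (max(arr) + 1)
    for x in arr:
        counts[x] += 1
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result
```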
My book says:
Suppose you have a group of N numbers and would like to determine the kth largest. This is known as the selection problem. Most students who have had a programming course or two would have no difficulty writing a program to solve this problem. There are quite a few “obvious” solutions. One way to solve this problem would be to read the N numbers into an array, sort the array in decreasing order.
It says that it would make sense to sort the array in decreasing order. How does that make sense? If I have an array of {1,9,3,7,4,6} and I want the greatest element, I would sort it in an increasing order so {1,3,4,6,7,9} and then return the last element. Why would the book say in decreasing order?
Because you may not want the largest element; the book says:
would like to determine the kth largest
If you sort it in ascending order, how do you know what the, say, 3rd largest number is without first finding out how big the array is?
This would be easier if the array were sorted in descending order, as the 3rd largest would simply be the 3rd element.
The order itself is not that important, but if you want the k-th largest element, then if you sort in descending order it is located at the k-th position (or index k-1 if we start with index 0), whereas if we sort in ascending order, it is located at position n-k+1 (or index n-k if the index starts at 0).
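To make that index arithmetic concrete, a small Python sketch (sorted() costs O(n log n) either way; only the indexing changes):

```python
def kth_largest_descending(nums, k):
    # Descending order: the k-th largest sits at index k - 1.
    return sorted(nums, reverse=True)[k - 1]

def kth_largest_ascending(nums, k):
    # Ascending order: the k-th largest sits at index len(nums) - k.
    return sorted(nums)[len(nums) - k]

# Both return 7 for the example array from the question:
# kth_largest_descending([1, 9, 3, 7, 4, 6], 2) == 7
# kth_largest_ascending([1, 9, 3, 7, 4, 6], 2) == 7
```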
For lazy sorting algorithms (like the ones in Haskell and C# Linq's .OrderBy), this can in fact have implications with respect to time complexity. If we implement a lazy selection sort algorithm (so a generator), then this will run in O(k×n) instead of O(n^2). If we use, for example, a lazy variant of QuickSort, it will take O(n + k log n) to obtain the first k elements.
In a language like Haskell, where laziness is really a key feature, one typically aims to minimize not only the time complexity of producing the entire result, but also that of producing subsets of the result.
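A rough Python analogue of the lazy selection sort idea (a generator that yields the output in sorted order, doing O(n) work per element, so consuming only the first k elements costs O(k×n) rather than sorting everything):

```python
from itertools import islice

def lazy_selection_sort(arr):
    # Yield elements in sorted order, one at a time.
    # Each yield costs O(n), so taking k elements costs O(k * n).
    remaining = list(arr)
    while remaining:
        smallest = min(remaining)
        remaining.remove(smallest)  # removes one occurrence
        yield smallest

# First 3 elements without sorting the whole list:
print(list(islice(lazy_selection_sort([5, 1, 4, 2, 8, 3]), 3)))  # [1, 2, 3]
```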
For sorting algorithms, why can't you just cut the array in half, use selection sort or insertion sort on both halves, and put them back together, to significantly improve the speed?
You're saying that your algorithm is faster than existing sorts, for example, selection sort and insertion sort. But then, once you've split your array in half, you'd be better off using your algorithm rather than selection/insertion sort to sort the halves (perhaps unless the halves are small).
This is exactly merge-sort.
You are right; this approach is followed in some sorting algorithms. For example, merge sort divides the array into two halves, and if these two halves are small you can apply insertion sort directly to them; but if they are large, that is not feasible, and you are better off dividing the halves further too (please see the details of merge sort). Insertion sort, selection sort, and bubble sort perform better when the array is small, or generally on nearly sorted data. If you are dealing with long data, choose merge sort, quick sort, or radix sort.
I have a problem: I'm very confused about the shell sort and insertion sort algorithms. How should we distinguish one from the other?
Shell sort is a generalized version of insertion sort. The basic principle is the same for both algorithms: you have a sorted sequence of length n, you insert the next unsorted element into it, and you get a sorted sequence of length n+1.
The difference is that insertion sort works with only one sequence (initially just the first element of the array) and expands it one element at a time (using the next element).
Shell sort, however, has a diminishing increment, which means that there is a gap between the compared elements (initially n/2). Hence there are n/2 sequences to be sorted using insertion sort. In each step the increment is shrunk (often just divided by 2.2) and the number of sequences is reduced. In the last step there is no gap and the algorithm degenerates to a simple insertion sort.
Because of the diminishing increment, the large and small elements are moved rapidly to the correct part of the array, and then in the last step they are sorted really fast using insertion sort. For good gap sequences this leads to a reduced time complexity of around O(n^(4/3)).
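A minimal shell sort sketch in Python (using the simple "halve the gap" schedule rather than dividing by 2.2; note that the final pass, with gap 1, is exactly plain insertion sort):

```python
def shell_sort(arr):
    gap = len(arr) // 2
    while gap > 0:
        # Gapped insertion sort: each gap-separated subsequence
        # is sorted by insertion sort.
        for i in range(gap, len(arr)):
            key = arr[i]
            j = i
            while j >= gap and arr[j - gap] > key:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = key
        gap //= 2  # shrink the increment; the last pass (gap 1) is insertion sort
    return arr
```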
You can implement insertion sort as a series of comparisons and swaps of contiguous elements. That makes it a "stable sort". Shell sort, instead, compares and swaps elements which are far from each other. That makes it faster.
I suppose that your confusion comes from the fact that shell sort can be implemented as several insertion sorts applied to different subsets of the data. Note that these subsets are composed of noncontiguous elements of the data sequence.
See Wikipedia for more details ;-)
Insertion sort is a simple, in-place, O(N^2) sort. Shell sort is a little more complex and harder to understand, and runs somewhere around O(N^(5/4)). Check out the links for examples; it should be easy to see the difference.