Why this property of quick sort?

Why does quick sort handle an already sorted set of items and a reverse-sorted set with equal speed?
Why is this not the case for other algorithms like heap sort, insertion sort, or selection sort?

Standard selection sort, heap sort, and quick sort are not adaptive, unlike insertion sort.
Look at a comparison table of sorting algorithms.
For example, both the best and the worst case for selection sort have complexity O(n^2): every pass always walks through a predetermined piece of the array.
The speed of quick sort depends on a proper choice of the partition (pivot) element for the given data set. Sorted order only matters if the pivot choice is naive (for example, always taking the first element of the current piece); otherwise the probability of the worst (quadratic) case is rather small.
If you need to sort almost sorted data sets, choose an adaptive sort, for example natural merge sort (or insertion sort for small data sets).
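As a rough sketch (my own illustration, not from the answer), here is a quicksort that counts partition comparisons with two interchangeable pivot rules; the function names are made up. With a middle-element pivot, sorted and reverse-sorted inputs cost about the same, while a first-element pivot degrades to quadratic on sorted input:

    def quicksort_comparisons(items, pick_pivot):
        """Return a rough count of element-vs-pivot comparisons."""
        comparisons = 0

        def sort(part):
            nonlocal comparisons
            if len(part) <= 1:
                return part
            pivot = pick_pivot(part)
            comparisons += len(part)   # one partition pass over this piece
            less = [x for x in part if x < pivot]
            equal = [x for x in part if x == pivot]
            greater = [x for x in part if x > pivot]
            return sort(less) + equal + sort(greater)

        sort(items)
        return comparisons

    first_element = lambda part: part[0]               # naive pivot choice
    middle_element = lambda part: part[len(part) // 2]

    data = list(range(500))
    print(quicksort_comparisons(data, middle_element))        # roughly n log n
    print(quicksort_comparisons(data[::-1], middle_element))  # about the same
    print(quicksort_comparisons(data, first_element))         # roughly n^2 / 2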

Which sorting algorithm for a linked list?

Which sorting algorithm fits best for singly and doubly linked lists that have fewer than 20 items, or for an almost sorted list? I am trying to understand which sorting algorithms fit small lists; I understand the situation for arrays but do not understand how it works for linked lists.
Insertion sort works quite fast on nearly sorted arrays and is a good option for a doubly linked list too. For small inputs it doesn't really matter much which algorithm you prefer, since all of them finish almost instantly. Note, however, that advanced algorithms are a bit of an overkill when there are only 10-20 elements to sort; their constant overhead is large.
On linked lists, merge sort can be performed in place, using no extra memory, since two sorted linked lists can be merged by relinking nodes with only O(1) extra space and no auxiliary array. Quicksort, however, fares worse: it relies heavily on indexing (random access), which linked structures are bad at.
Among the simple algorithms, selection sort is usually never the best choice since it always takes O(N^2) time. Bubble sort and insertion sort have a best case of O(N) and the same O(N^2) worst case as selection sort.
Insertion sort, on the other hand, does not map naturally onto a singly linked list, since we cannot move backwards, only forwards. Bubble sort works fine. On a doubly linked list, both perform well.
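A minimal sketch of that relink-based merge sort for a singly linked list (the Node class and function names are my own, for illustration); the merge step allocates no auxiliary array:

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def merge(a, b):
        """Merge two sorted lists by relinking nodes: O(1) extra space."""
        dummy = tail = Node(None)
        while a and b:
            if a.value <= b.value:
                tail.next, a = a, a.next
            else:
                tail.next, b = b, b.next
            tail = tail.next
        tail.next = a or b
        return dummy.next

    def merge_sort(head):
        if head is None or head.next is None:
            return head
        # Split the list in half with slow/fast pointers.
        slow, fast = head, head.next
        while fast and fast.next:
            slow, fast = slow.next, fast.next.next
        mid, slow.next = slow.next, None
        return merge(merge_sort(head), merge_sort(mid))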

How to insert an element in sorted order in O(1) time?

What is considered an optimal data structure for pushing elements in sorted order? I am looking for an idea, or a custom data structure, with which I can insert each element in O(1) time while keeping the data sorted. I do not want to use binary search, a tree, or a linked list to do it.
Values range up to 50,000 and can be inserted in any random order. After each insert my test case checks whether the data structure is sorted, so the structure has to be in sorted order after every insert.
Please share your suggestions and views on this. How can I achieve this sorted insertion in O(1)?
If you could do sorted insertion in O(1) time, then you could sort a list of n elements in O(n) time. But comparison-based sorting has a proven Ω(n log n) lower bound, so the original assumption, that such insertion can be done in O(1), is wrong.
If you are dealing with integers, the closest you can get to your requirements is by using a Van Emde Boas tree.
You can't get pure O(1). Either you have to do a binary search, or move elements around, or find the right place in a tree.
Hash tables will not keep your elements sorted in any way; with vEB trees you at least have a FindNext operation.
The only "sorting" you can do in O(1) is to use your sort keys as direct indexes into an array, which becomes impractical or plain impossible as soon as your keys can vary over too broad a range.
Maybe "Bucket sort" will fulfill your requirement of O(1) insertion in sorted list, limited value range & insert with random order.
For example, you can split 1~50,000 number to 10,000 buckets, then when you get a number N, you can push it in bucket n/5. after that, you just need to rerank the number in bucket n/5.
this is "nearly" O(1).

Sorting algorithm that depends on initial organization of data

I am presently studying sorting algorithms. I have learned that the quick sort algorithm depends on the initial organization of the data: if the array is already sorted, quick sort becomes slower. Is there any other sort whose performance depends on the initial organization of the data?
Of course. Insertion sort will be O(n) on descending (reverse-sorted) input:

    def insertion_sort(arr):
        out = []
        while arr:
            x = arr.pop(0)    # take the first remaining element
            # insert x into the already sorted output
            i = 0
            while i < len(out) and out[i] < x:
                i += 1
            out.insert(i, x)
        return out

because on descending input each popped element is smaller than everything inserted so far, so every insert lands at the front of the output after a single comparison, i.e. O(1). If popping from the back is used instead of popping from the front, the same code is fastest on ascending sorted input (this assumes the pop and the front insertion are themselves O(1), as they are on a linked list or deque; Python lists are used here only to keep the sketch short).
All fast sorting algorithms minimize comparison and move operations. Minimizing move operations is dependent on the initial element ordering. (I'm assuming that by "initial organization" you mean the initial element ordering.)
Additionally, the fastest real-world algorithms exploit locality of reference, which also depends on the initial ordering.
If you are only interested in dependencies that slow down or speed up the sorting dramatically: bubble sort, for example, will complete in a single pass on already sorted data (see the sketch below).
Finally, many sorting algorithms have average time complexity O(N log N) but worst-case complexity O(N^2). This means there exist specific inputs (e.g. sorted or reverse-sorted) that provoke the bad run-time behaviour in these O(N^2) algorithms. Some quicksort versions are examples of such algorithms.
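A short sketch (my own) of the early-exit bubble sort mentioned above, which finishes after one pass on sorted input:

    def bubble_sort(arr):
        """Bubble sort with an early-exit flag: if a full pass makes no
        swaps, the data is already sorted and we stop after that O(n) pass."""
        for end in range(len(arr) - 1, 0, -1):
            swapped = False
            for i in range(end):
                if arr[i] > arr[i + 1]:
                    arr[i], arr[i + 1] = arr[i + 1], arr[i]
                    swapped = True
            if not swapped:
                break
        return arr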
If what you're asking is "should I worry about which sorting algorithm to pick on a case-by-case basis?", then unless you're processing billions of operations the short answer is "no". Most of the time quicksort will be just fine (quicksort with a carefully chosen pivot, like Java's implementation).
In general cases, quicksort is good enough.
On the other hand, if your system always receives the source data in a consistent initial ordering, and the sort costs you significant CPU time and power each run, then you should definitely find the right algorithm for that corner case.

Why does an optimized sort take less time than a normal sort?

Everybody knows that an optimized sort takes less time than a normal sort. Knowing that, the first thought that comes to my mind is: what makes it faster?
The primary reason an optimized sort runs faster is that it reduces the number of times two elements are compared and/or moved.
If you think about a naive worst-case sort of N items, where you compare each of the N items to each of the N-1 other items, there is a total of N*(N-1) = N^2-N comparisons.
One possible way of optimizing the process would be to sort a smaller list - say N/2. That would take N/2*(N/2-1) comparisons, or N^2/4-N/2. But of course, you would have to do that on both halves of the list, which would double that - so N^2/2-N comparisons. You would also need N/2 additional comparisons to merge the two halves back together, so the total number of comparisons to sort two half-sized lists and merge them back together would be N^2/2-N+N/2, or N^2/2-N/2 - half the number of comparisons for sorting the whole list. Recursively subdividing your lists down to 2 items that can be sorted with just one comparison can provide significant savings in the number of comparisons.
There are other divide-and-conquer ideas as well as ways to exploit discovered symmetries in the lists to help optimize the sorting algorithm, and coupled with the different ways to arrange data in memory (arrays vs linked lists vs heaps vs trees... etc.) this leads to different optimized algorithms.
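A tiny sketch (my own, following the argument above and its simplifying assumption of N/2 merge comparisons) that compares the comparison counts of the naive all-pairs approach and the halving approach:

    def count_naive(n):
        """All-pairs: each of n items compared to the n-1 others."""
        return n * (n - 1)

    def count_halving(n):
        """Sort two halves recursively, then ~n/2 comparisons to merge."""
        if n <= 2:
            return 1 if n == 2 else 0
        half = n // 2
        return 2 * count_halving(half) + half

    for n in (16, 1024):
        print(n, count_naive(n), count_halving(n))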

Sorting and searching relation

Let's say we want to find some known key in an array and extract the associated value. There are two possible approaches (maybe more?) to do it: a linear approach, where we compare each array key with the needle, O(N); or we can sort the array, O(N*log(N)), and then apply binary search, O(log(N)). I have several questions about this.
As far as I can see, sorting is closely related to searching, but standalone sorting is useless; sorting is an instrument to simplify searching. Am I correct? Or are there other uses of sorting?
If we talk about searching, we can search unsorted data in O(N), or sorted data in O(N*log(N)) + O(log(N)). Searching can exist separately from sorting. So when we need to find something in an array only once we should use linear search, and if the search is repeated we should sort the data first and then search?
Don't think that an O(n * lg(n)) sort is needed before every search. That would be wasteful, because O(n * lg(n)) + O(lg(n)) > O(n); it would be quicker to do a linear search on the unsorted data, which on average inspects n/2 elements.
The idea is to sort your random data only once, using an O(n * lg(n)) algorithm; any data added after that should be inserted in its sorted position, so every search thereafter can be done in O(lg(n)) time.
You might also be interested in hash tables, a kind of array that is unsorted but has O(1) expected access time.
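A small sketch (my own illustration) of that pattern using Python's bisect module: sort once, keep later inserts in order, and search in O(lg n):

    import bisect

    data = [42, 7, 19, 3, 88]
    data.sort()                    # one-time O(n log n) sort

    bisect.insort(data, 25)        # later additions go into their sorted position

    def contains(sorted_list, key):
        """O(log n) membership test via binary search."""
        i = bisect.bisect_left(sorted_list, key)
        return i < len(sorted_list) and sorted_list[i] == key

    print(contains(data, 19))      # True
    print(contains(data, 20))      # False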
It is extremely rare that you would create an array of N items and then search it only once. Therefore it is usually profitable to improve the data structure holding the items in order to improve search time (amortize the set-up time over all the searches and see if you save overall time).
However there are many other considerations: Do you need to add new items to the collection? Do you need to remove items from the collection? Are you willing to spend extra memory in order to improve sort time? Do you care about the original order in which the items were added to the collection? All of these factors, and more, influence your choice of container and searching technique.
