Quick Sort Understanding in this case - data-structures

I am revising the quicksort algorithm, but it is proving to be a bit more complex than I thought.
Suppose My Array has the following A = {7,1,5,8,2,0}
Now I select my pivot as index 2 of the array, which has the value 5. (Eventually all elements less than 5 should end up on the LHS and all greater elements on the RHS.)
Now I start moving from the left (index 0) towards the right (index 2) until I reach a value that is greater than 5. If a value on the left side is greater than the pivot value 5, it needs to move to the right side. For it to move to the right side, it requires an empty slot so that the two values can be interchanged. So the first interchange gives me the array
A = {0,1,5,8,2,7}
Now two elements still remain on the left side, the 2 and the 7. (The right side also moves towards the pivot, leftwards, and if an element there is less than the pivot, it is supposed to move to the other side.)
Now here is the question: what happens if there is no free slot on the right side and an element on the left side needs to be moved to the right of the pivot? Am I missing something?

Well, the "partition" step you're talking about can be implemented in various ways.
The easiest way to implement it is, imo, this one:
1) Pick a pivot element.
2) Move the pivot element to the rightmost position.
3) Scan from the left and pack all the elements that are smaller than the pivot sequentially at the front.
4) Now you know how many elements are smaller, so do one final swap to make sure the pivot element ends up in the correct place.
I've taken this from the wiki and added the step numbers to the code, just to make it clear:
// left is the index of the leftmost element of the subarray
// right is the index of the rightmost element of the subarray (inclusive)
// number of elements in subarray = right - left + 1
partition(array, left, right)
    pivotIndex := choosePivot(array, left, right)   // step 1
    pivotValue := array[pivotIndex]
    swap array[pivotIndex] and array[right]         // step 2
    storeIndex := left
    for i from left to right - 1                    // step 3
        if array[i] < pivotValue
            swap array[i] and array[storeIndex]
            storeIndex := storeIndex + 1
    swap array[storeIndex] and array[right]         // step 4
    return storeIndex
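In Python, the four steps might look like this (a sketch, not authoritative; `choose_pivot` here just picks a random index):

```python
import random

def choose_pivot(array, left, right):
    # Any strategy works; a random index is a common, simple choice.
    return random.randint(left, right)

def partition(array, left, right):
    pivot_index = choose_pivot(array, left, right)                       # step 1
    pivot_value = array[pivot_index]
    array[pivot_index], array[right] = array[right], array[pivot_index]  # step 2
    store_index = left
    for i in range(left, right):                                         # step 3
        if array[i] < pivot_value:
            array[i], array[store_index] = array[store_index], array[i]
            store_index += 1
    array[store_index], array[right] = array[right], array[store_index]  # step 4
    return store_index
```

Because the pivot is parked at `right` during the scan, there is always a "slot" available: larger elements simply stay put while smaller elements are swapped forward, which addresses the question about needing an empty slot.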

The basic idea of quicksort is this:
you choose a pivot element and try to place all the elements less than the pivot to its left, and all the elements greater than or equal to it to its right. This process happens recursively.
As you have chosen 5, one pointer moves in from the left and another from the right, each comparing elements against the pivot; when the two pointers cross over, the pivot is swapped with the element at the left pointer.
In the first step you swapped 0 and 7, which is fine. Now the pointers each advance one position: the left pointer points to the 1 and the right pointer to the 2. The right pointer stops at 2, since it is less than the pivot 5; the left pointer moves on to 8, and 8 and 2 are swapped. The pointers advance once more, the left pointer crosses over the right pointer, and so the pivot is swapped with the 2.
Now, as you can see, 5 is in its correct place.
The array would look like
0,1,2,5,8,7
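The two-pointer scan described above can be sketched in Python. This is a hedged sketch, not the exact hand-trace: it parks the pivot at the end first (the walk-through leaves it in place), and the name `two_pointer_partition` is mine:

```python
def two_pointer_partition(a, lo, hi, pivot_index):
    """Hoare-style two-pointer partition of a[lo..hi] (inclusive)."""
    a[pivot_index], a[hi] = a[hi], a[pivot_index]  # park the pivot at the end
    pivot = a[hi]
    i, j = lo, hi - 1
    while True:
        # Left pointer advances over elements smaller than the pivot.
        while i <= j and a[i] < pivot:
            i += 1
        # Right pointer retreats over elements larger than the pivot.
        while i <= j and a[j] > pivot:
            j -= 1
        if i >= j:          # pointers crossed: scan is done
            break
        a[i], a[j] = a[j], a[i]  # swap the out-of-place pair
        i += 1
        j -= 1
    a[i], a[hi] = a[hi], a[i]   # move pivot to its final slot
    return i
```

On A = [7, 1, 5, 8, 2, 0] with the pivot at index 2 (value 5), this yields [2, 1, 0, 5, 7, 8]: the same three-way split as the walk-through, just with a different ordering inside each half.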
A useful link: https://www.youtube.com/watch?v=8hHWpuAPBHo
Algorithm:
// left is the index of the leftmost element of the subarray
// right is the index of the rightmost element of the subarray (inclusive)
// number of elements in subarray = right - left + 1
partition(array, left, right)
    pivotIndex := choosePivot(array, left, right)
    pivotValue := array[pivotIndex]
    swap array[pivotIndex] and array[right]
    storeIndex := left
    for i from left to right - 1
        if array[i] < pivotValue
            swap array[i] and array[storeIndex]
            storeIndex := storeIndex + 1
    swap array[storeIndex] and array[right] // Move pivot to its final place
    return storeIndex

Related

Is this wikipedia code for quickselect incorrect?

I was reading the quickselect algorithm on wikipedia: https://en.wikipedia.org/wiki/Quickselect
function select(list, left, right, k)
    if left = right        // If the list contains only one element,
        return list[left]  // return that element
    pivotIndex := ...      // select a pivotIndex between left and right,
                           // e.g., left + floor(rand() % (right - left + 1))
    pivotIndex := partition(list, left, right, pivotIndex)
    // The pivot is in its final sorted position
    if k = pivotIndex
        return list[k]
    else if k < pivotIndex
        return select(list, left, pivotIndex - 1, k)
    else
        return select(list, pivotIndex + 1, right, k - pivotIndex)
Isn't the last recursive call incorrect? I believe the last argument should just be k rather than k - pivotIndex. Am I missing something here?
You are right; the last edit from September 20 introduced this error.
The top comment says
// Returns the k-th smallest element of list within left..right inclusive
// (i.e. left <= k <= right).
so k is defined over the whole index range; it is absolute, not relative to the local lower bound, as you noticed in your comment.
I also checked my implementation of kselect; it uses k in the second call.
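For reference, a Python sketch of the corrected routine (names mine; standard Lomuto partition), with k passed unchanged in both recursive calls:

```python
import random

def partition(a, left, right, pivot_index):
    # Lomuto partition: place the pivot at its final sorted position.
    a[pivot_index], a[right] = a[right], a[pivot_index]
    pivot = a[right]
    store = left
    for i in range(left, right):
        if a[i] < pivot:
            a[i], a[store] = a[store], a[i]
            store += 1
    a[store], a[right] = a[right], a[store]
    return store

def select(a, left, right, k):
    # k is an absolute index into the whole list, so it is passed
    # unchanged in BOTH recursive calls.
    if left == right:
        return a[left]
    pivot_index = random.randint(left, right)
    pivot_index = partition(a, left, right, pivot_index)
    if k == pivot_index:
        return a[k]
    elif k < pivot_index:
        return select(a, left, pivot_index - 1, k)
    else:
        return select(a, pivot_index + 1, right, k)  # k, not k - pivotIndex
```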

Meaning of L+k-1 index in Quickselect algorithm

I am studying quickselect for a midterm in my algorithms analysis course and the algorithm I have been working with is the following:
Quickselect(A[L...R], k)
// Input: Array indexed from 0 to n-1 and an index of the kth smallest element
// Output: Value of the kth position
    s = LomutoPartition(A[L...R]) // works by taking the first index and value as the
                                  // pivot and returns its index in the sorted position
    if (s == k-1)  // we have our k-th element; it's k-1 because arrays are 0-indexed
        return A[s]
    else if (s > L+k-1)  // this is my question below
        Quickselect(L...s-1, k)  // the element we want is somewhere to the left
                                 // of our pivot, so we search that side
    else
        Quickselect(s+1...R, k-1-s)
        /* the element we want is greater than our pivot, so we search the right side;
         * however, if we do, we must scale the k-th position accordingly by removing
         * 1 and s so that the new value will not push the subarray out of bounds
         */
My question is: why do we need L + k - 1 in the else-if condition? Doing a few examples on paper, I have come to the conclusion that, no matter the context, L is always an index and that index is always 0, which does nothing for the algorithm. Right?
There seems to be a discrepancy between the line
if (s == k-1)
and the line
else if (s > L+k-1)
The two interpretations are incompatible.
As Trincot correctly notes, from the second recursive call on, it's possible that L is not 0. Your Lomuto subroutine doesn't take an array, a low index, and a high index (as the one in Wikipedia does, for example). Instead it takes just an array (which happens to be a subarray, between low and high, of some other array). The index s it returns is thus relative to the subarray, and to translate it to a position within the original array, you need to add L. This is consistent with your first line, except that the line following it should read
return A[L + s]
Your second line should therefore also compare to k - 1, not L + k - 1.
Edit
Following the comment, here is the pseudo-code from Wikipedia:
// Returns the n-th smallest element of list within left..right inclusive
// (i.e. left <= n <= right).
// The search space within the array is changing for each round - but the list
// is still the same size. Thus, n does not need to be updated with each round.
function select(list, left, right, n)
    if left = right        // If the list contains only one element,
        return list[left]  // return that element
    pivotIndex := ...      // select a pivotIndex between left and right,
                           // e.g., left + floor(rand() % (right - left + 1))
    pivotIndex := partition(list, left, right, pivotIndex)
    // The pivot is in its final sorted position
    if n = pivotIndex
        return list[n]
    else if n < pivotIndex
        return select(list, left, pivotIndex - 1, n)
    else
        return select(list, pivotIndex + 1, right, n)
Note the conditions
if n = pivotIndex
and
else if n < pivotIndex
which are consistent in their interpretation of the indexing returned in partitioning.
Once again, it's possible to define the partitioning sub-routine either as returning the index relative to the start of the sub-array, or as returning the index relative to the original array, but there must be consistency in this.
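To make the relative-versus-absolute point concrete, here is a Python sketch of the question's setup (function names are mine; k is 1-based over the whole array, as in the question): the Lomuto routine sees only the subarray and returns a relative index s, so the caller must translate with L + s throughout:

```python
def lomuto_relative(sub):
    # Partition sub (a standalone list) around its FIRST element;
    # return the pivot's index relative to sub, as in the question.
    pivot = sub[0]
    store = 0
    for i in range(1, len(sub)):
        if sub[i] < pivot:
            store += 1
            sub[i], sub[store] = sub[store], sub[i]
    sub[0], sub[store] = sub[store], sub[0]
    return store

def quickselect(a, L, R, k):
    # k is 1-based over the whole array a; L..R is the current window.
    sub = a[L:R + 1]            # the partition routine only sees the subarray
    s = lomuto_relative(sub)
    a[L:R + 1] = sub            # write the partitioned subarray back
    if L + s == k - 1:          # translate relative s to an absolute index
        return a[L + s]
    elif L + s > k - 1:
        return quickselect(a, L, L + s - 1, k)
    else:
        return quickselect(a, L + s + 1, R, k)
```

Note that with this consistent translation, k itself never needs rescaling; only the pivot position does.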

Quicksort efficiency: does direction of scan matter?

Here is my implementation of an in-place quicksort algorithm, an adaptation from this video:
import math
import random

def partition(arr, start, size):
    if size < 2:
        return
    index = int(math.floor(random.random() * size))
    L = start
    U = start + size - 1
    pivot = arr[start + index]
    while L < U:
        while arr[L] < pivot:
            L = L + 1
        while arr[U] > pivot:
            U = U - 1
        temp = arr[L]
        arr[L] = arr[U]
        arr[U] = temp
    partition(arr, start, L - start)
    partition(arr, L + 1, size - (L - start) - 1)
There seem to be a few implementations of the scanning step in which the array (or the current portion of the array) is divided into three segments: elements lower than the pivot, the pivot, and elements greater than the pivot. I am scanning from the left for elements greater than or equal to the pivot, and from the right for elements less than or equal to the pivot. Once one of each is found, the swap is made, and the loop continues until the left marker is equal to or greater than the right marker. However, there is another method, following this diagram, that results in fewer partition steps in many cases. Can someone verify which method is actually more efficient for the quicksort algorithm?
Both of the methods you mention are basically the same. In the above code,
index = int(math.floor(random.random()*size))
the index is chosen randomly, so it can be the first element or the last element. In the linked image, https://s3.amazonaws.com/hr-challenge-images/quick-sort/QuickSortInPlace.png, they initially take the last element as the pivot and move in the same way as you do in the code.
So both methods are the same. In your code you select the pivot randomly; in the image, the pivot is stated.

Why the different ways of calling quicksort recursively?

I've noticed a discrepancy in the way quicksort is called recursively.
One way is
quicksort(Array, left, right)
    x = partition(Array, left, right)
    quicksort(Array, left, x-1)
    quicksort(Array, x+1, right)

partition(array, left, right)
    pivotIndex := choose-pivot(array, left, right)
    pivotValue := array[pivotIndex]
    swap array[pivotIndex] and array[right]
    storeIndex := left
    for i from left to right - 1
        if array[i] ≤ pivotValue
            swap array[i] and array[storeIndex]
            storeIndex := storeIndex + 1
    swap array[storeIndex] and array[right] // Move pivot to its final place
    return storeIndex
[EXAMPLE]
This makes sense, because quicksort works by partitioning the other elements around the pivot, so the element Array[x] is in its final position. Therefore the ranges [left, x-1] and [x+1, right] remain.
The other way
quicksort(Array, left, right)
    x = partition(Array, left, right)
    quicksort(Array, left, x)
    quicksort(Array, x+1, right)

PARTITION(A, p, r)
    x := A[p]
    i := p - 1
    j := r + 1
    while TRUE
        repeat j := j - 1
        until A[j] ≤ x
        repeat i := i + 1
        until A[i] ≥ x
        if i < j
            exchange A[i] and A[j]
        else
            return j
[EXAMPLE]
Notice the -1 is missing. This seems to suggest that the array was partitioned correctly but that no single element is in its final position. The two ways are not interchangeable: if I put a -1 into the second way, an input array is sorted improperly.
What causes the difference? Obviously it's somewhere in the partition method; does it have to do with whether Hoare's or Lomuto's algorithm was used?
There is not actually that much difference in efficiency between the two versions, except when operating on the smallest arrays. The majority of the work is done in separating one large array of size n, whose values can be at many as n spaces away from their proper positions, into two smaller arrays which, being smaller, cannot have values as far displaced from their proper positions, even in the worst case. The "one way" essentially creates three partitions at each step - but since the third one is just one space large, it only makes an O(1) contribution towards the progress of the algorithm.
That being said, it's very easy to implement that final switch, so I'm not sure why the code of your "other way" example doesn't take that step. They even point out a pitfall (if the last rather than the first element is chosen for the pivot, the recursion never ends) which would be avoided entirely by implementing that switch that eliminates the pivot element at the end. The only situation I can imagine where that would be the preferable code to use would be where code space was at an absolute premium.
If nothing else, excluding or passing the partition index might be the difference between closed and half-open intervals: right might be the first index not to touch - no telling from incomplete snippets without references.
The difference is caused by the fact that the return value of partition() means different things.
In the one way, the return value of partition() is where the pivot used for the partition ended up; i.e., Array[x] after partition() is the pivot that was used in partition().
In the other way, the return value of partition() is NOT where the pivot ended up; Array[x] after partition() is an element that is no greater than the pivot used in partition(), but we don't know much more than that. The actual pivot could be located anywhere in the upper half of the array.
From this it follows that the first recursive call with x-1 instead of x in the other way could quite easily give incorrect results, e.g. pivot = 8, Array[x] = 5 and Array[x-1] = 7.
If you think about it, the other way would not make any difference to the algorithm. If the partition algorithm is the same as in the first one, then including the pivot in one of the subarrays would have no effect, since in that case none of the other elements would swap places with the pivot within the subarray.
At most it would increase the number of comparisons somewhat, although I'm unsure whether it would adversely affect the sorting time for large arrays.
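A runnable Python sketch of the "other way" (CLRS-style Hoare partition; names mine). The recursion keeps index x because the partition only guarantees that everything in a[left..x] is no greater than everything in a[x+1..right], not that a[x] is the pivot's final position:

```python
def hoare_partition(a, p, r):
    # Pivot is a[p]. Returns j such that every element of a[p..j] is
    # <= every element of a[j+1..r]; the pivot's final position is
    # NOT known, hence the caller must recurse on (p, j), not (p, j-1).
    x = a[p]
    i, j = p - 1, r + 1
    while True:
        j -= 1
        while a[j] > x:
            j -= 1
        i += 1
        while a[i] < x:
            i += 1
        if i < j:
            a[i], a[j] = a[j], a[i]
        else:
            return j

def quicksort(a, left, right):
    if left < right:
        x = hoare_partition(a, left, right)
        quicksort(a, left, x)       # note: x, not x - 1
        quicksort(a, x + 1, right)
```

Using x - 1 here can drop an element that is still out of place, which is exactly the failure the question observed.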

Selection algorithm to find the median, elements to the left and to the right

Suppose I had an unsorted array A of size n.
How to find the n/2, n/2−1, n/2+1th smallest element from the original unsorted list in linear time?
I tried to use the selection algorithm in wikipedia (Partition-based general selection algorithm is what I am implementing).
function partition(list, left, right, pivotIndex)
    pivotValue := list[pivotIndex]
    swap list[pivotIndex] and list[right]  // Move pivot to end
    storeIndex := left
    for i from left to right-1
        if list[i] < pivotValue
            swap list[storeIndex] and list[i]
            increment storeIndex
    swap list[right] and list[storeIndex]  // Move pivot to its final place
    return storeIndex
function select(list, left, right, k)
    if left = right        // If the list contains only one element
        return list[left]  // Return that element
    select pivotIndex between left and right  // What value of pivotIndex should I select?
    pivotNewIndex := partition(list, left, right, pivotIndex)
    pivotDist := pivotNewIndex - left + 1
    // The pivot is in its final sorted position,
    // so pivotDist reflects its 1-based position if list were sorted
    if pivotDist = k
        return list[pivotNewIndex]
    else if k < pivotDist
        return select(list, left, pivotNewIndex - 1, k)
    else
        return select(list, pivotNewIndex + 1, right, k - pivotDist)
But I have not understood 3 or 4 of the steps, and I have the following doubts:
Did I pick the correct algorithm, and will it really run in linear time for my program? I am a bit confused, as it resembles quicksort.
When calling the function select from the main function, what will the values of left, right and k be? Consider my array to be list[1...N].
Do I have to call the select function three times (once to find the n/2-th smallest, once for the n/2+1-th, and once for the n/2-1-th), or can it be done in a single call? If yes, how?
Also, in function select (third step), "select pivotIndex between left and right": what value of pivotIndex should I select for my program/purpose?
Thanks!
It is like quicksort, but it's linear because quicksort must then handle both the left and the right side of the pivot, while quickselect only handles one side.
The initial call should be Select(A, 0, N-1, (N-1)/2) if N is odd; you'll need to decide exactly what you want to do if N is even.
To find the median and its left/right neighbors, you probably want to call it once to find the median, and then just take the max of the elements to its left and the min of the elements to its right, because once the median selection is done, you know that all elements to the left of the median are less than it and all elements to the right are greater (or equal). This is O(n) + n/2 + n/2 = O(n) total time.
There are lots of ways to choose pivot indices. For casual purposes, either the middle element or a random index will probably suffice.
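A self-contained Python sketch of this plan (function names mine; lower median for even n): one quickselect call places the median, then a max/min scan of each side finds the neighbors, for O(n) expected time overall:

```python
import random

def select(a, left, right, k):
    # Iterative quickselect (Lomuto partition, random pivot). After it
    # returns, a[k] holds the k-th smallest (0-based) and the array is
    # partitioned around that position.
    while left < right:
        p = random.randint(left, right)
        a[p], a[right] = a[right], a[p]
        store = left
        for i in range(left, right):
            if a[i] < a[right]:
                a[i], a[store] = a[store], a[i]
                store += 1
        a[store], a[right] = a[right], a[store]
        if k == store:
            return a[k]
        elif k < store:
            right = store - 1
        else:
            left = store + 1
    return a[left]

def median_and_neighbors(a):
    # One selection call, then a max/min scan of each side: O(n) total.
    # Assumes len(a) >= 3; uses the lower median for even lengths.
    b = list(a)
    m = (len(b) - 1) // 2                 # 0-based lower-median index
    median = select(b, 0, len(b) - 1, m)
    return max(b[:m]), median, min(b[m + 1:])
```

The single call suffices for all three values precisely because of the partitioning guarantee the answer describes: no further selection is needed for the neighbors.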
