LinkedList divide and merge in reverse order - algorithm

I have a linked list as 1->2->3->4->5->6
I need to change it to 1->6->2->5->3->4
i.e. the last element is linked to the first element, the second-to-last element is linked to the second element, and so on.
I used 2 pointers, one fast and one slow. Once I reach the center, I push all the elements of the second half onto a stack: [4, 5, 6].
Now, using a third pointer, I traverse the original linked list and after each node insert a node popped from the stack, i.e. 6, then 5, then 4.
Is there any better solution than this?

I think this is optimal.
I use 2 pointers: a slow one that moves one node at a time and a fast one that moves two nodes at a time.
That way I find the center node and also the count up to the middle.
Now, from the center till the end, I reverse the linked list.
I have 2 linked lists now: one from start to center, and the other, reversed, from center to end.
Simply take one node from list 1, link the next node from list 2 after it, and advance both lists.
No extra space required, and the time complexity is O(N).
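A rough Java sketch of this approach (the ListNode class and the method name are mine, not part of the original answer): find the middle with slow/fast pointers, reverse the second half in place, then splice the two halves together node by node.

class ListNode { int val; ListNode next; ListNode(int v) { val = v; } }

static ListNode reorder(ListNode head) {
    if (head == null || head.next == null) return head;

    // Find the middle: slow advances one node, fast advances two.
    ListNode slow = head, fast = head;
    while (fast.next != null && fast.next.next != null) {
        slow = slow.next;
        fast = fast.next.next;
    }

    // Reverse the second half (everything after slow) and cut the list in two.
    ListNode prev = null, curr = slow.next;
    slow.next = null;
    while (curr != null) {
        ListNode next = curr.next;
        curr.next = prev;
        prev = curr;
        curr = next;
    }

    // Interleave: take one node from each half in turn.
    ListNode first = head, second = prev;
    while (second != null) {
        ListNode t1 = first.next, t2 = second.next;
        first.next = second;
        second.next = t1;
        first = t1;
        second = t2;
    }
    return head;   // 1->2->3->4->5->6 becomes 1->6->2->5->3->4
}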

Finding the kth last element of a singly linked list: answer explanation

When you need to find the kth last element of a singly linked list, the usual naive approach is to perform two passes: the first to find the length of the list, and the second to iterate until the (length-k)th element.
Whereas the optimized version takes advantage of two pointers:
p1 referring to the head of the list
p2 being k elements ahead of p1
This allows us to return p1's element when p2 reaches the end of the list.
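A rough sketch of that two-pointer version (assuming a minimal ListNode class with val and next fields; the method name is mine):

class ListNode { int val; ListNode next; }

// Returns the kth-from-last node, or null if the list has fewer than k nodes.
static ListNode kthFromEnd(ListNode head, int k) {
    ListNode p2 = head;
    for (int i = 0; i < k; i++) {     // advance p2 so it is k nodes ahead of p1
        if (p2 == null) return null;
        p2 = p2.next;
    }
    ListNode p1 = head;
    while (p2 != null) {              // both pointers now move in lockstep
        p1 = p1.next;
        p2 = p2.next;
    }
    return p1;                        // p2 fell off the end, so p1 is kth from the end
}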
I don't understand why the second approach is faster than the first when in both cases we have one pointer iterating all over the list and another until the (length-k)th element.
Is it due to cache optimization?
Thanks.
If you keep p2 exactly k elements behind p1, then it doesn't really help much, since you have to do the same number of traversals altogether.
You can optimize the procedure by using more pointers, though.
As you walk through the list, let's say you remember the pointer at every (k/m)th position, for some m. You only need to remember the last m+1 of those pointers. Then, when you get to the end of the list, instead of iterating again from the beginning, start at the oldest pointer you remembered. It will be between k and k + (k/m) elements behind the end, so you only have to move it forward by at most k/m positions.
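A sketch of that idea, reusing the ListNode above (the bookkeeping details are my own, not from the answer):

// Remember a pointer every s ~ k/m nodes, keeping only the last m+1 of them in a small
// ring buffer. At the end of the single pass, resume from the oldest remembered pointer
// instead of walking again from the head: only about k/m + m extra steps remain.
static ListNode kthFromEndCheckpointed(ListNode head, int k, int m) {
    int s = (k + m - 1) / m;                    // checkpoint spacing, roughly k/m
    ListNode[] ring = new ListNode[m + 1];
    int recorded = 0, n = 0;
    for (ListNode cur = head; cur != null; cur = cur.next, n++) {
        if (n % s == 0) ring[recorded++ % ring.length] = cur;
    }
    if (k > n) return null;                     // fewer than k nodes in the list
    int oldest = Math.max(0, recorded - ring.length);   // index of the oldest surviving checkpoint
    ListNode cur = ring[oldest % ring.length];          // it sits at position oldest * s
    for (int pos = oldest * s; pos < n - k; pos++) cur = cur.next;
    return cur;                                 // node at position n - k, i.e. kth from the end
}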
Consider non-uniform memory access times and a singly linked list of length n:
- in the counted iteration approach, accesses to the same node will be n accesses apart
- in the lagging pointer approach, accesses to the same node will be k accesses apart
With an LRU cache (or with each LRU cache level), the former is more likely to induce capacity misses than the latter.

Quick sorting algorithm states using middle element as pivot

I need help understanding exactly how the quick sort algorithm works. I've been watching teaching videos and still fail to really grasp it completely.
I have an unsorted list: 1, 2, 9, 5, 6, 4, 7, 8, 3
And I have to quick sort it using 6 as the pivot.
I need to see the state of the list after each partition procedure.
My main problem is understanding what the order of the elements is before and after the pivot. So in this case, if we make 6 the pivot, I know the numbers 1 to 5 will come before 6 and 7 to 9 will go after it. But what will the order of the numbers 1 to 5 and of 7 to 9 be after the first partition, given my list above?
Here is the partition algorithm that I want to use (bear in mind that I'm using the middle element as my initial pivot):
1. Determine the pivot, and swap the pivot with the first element of the list.
2. Suppose that the index smallIndex points to the last element smaller than the pivot. The index smallIndex is initialized to the first element of the list.
3. For the remaining elements in the list (starting at the second element), if the current element is smaller than the pivot:
   a. Increment smallIndex.
   b. Swap the current element with the array element pointed to by smallIndex.
4. Swap the first element, that is the pivot, with the array element pointed to by smallIndex.
It would be amazing if anyone could show the list after each single little change that occurs to the list in the algorithm.
It doesn't matter.
All that matters (all that the partitioning process guarantees) is that, after it has been run, no value on the left-hand side of the split point that emerges is greater than the pivot, and no value on the right-hand side is less than the pivot value.
The internal order of the two partitions is then handled in the subsequent recursive calls for each half.
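To make this concrete, here is a small sketch of the partition procedure described in the question (Java; the array and helper names are mine), together with the state it produces for the list above:

// Partition a[low..high] around the middle element, following the steps in the question;
// returns the pivot's final index.
static int partition(int[] a, int low, int high) {
    int mid = (low + high) / 2;
    swap(a, low, mid);                 // step 1: move the pivot to the front
    int pivot = a[low];
    int smallIndex = low;              // last position known to hold an element smaller than the pivot
    for (int i = low + 1; i <= high; i++) {
        if (a[i] < pivot) {
            smallIndex++;
            swap(a, smallIndex, i);
        }
    }
    swap(a, low, smallIndex);          // finally, put the pivot between the two groups
    return smallIndex;
}

static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

// For {1, 2, 9, 5, 6, 4, 7, 8, 3} the middle element is 6; after partitioning, the array
// is {3, 2, 5, 1, 4, 6, 7, 8, 9} and the method returns index 5. The exact order inside
// {3, 2, 5, 1, 4} and {7, 8, 9} is simply whatever the swaps happen to produce.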

Quicksort: pivot position after one partition

I am reading about quicksort, looking at different implementations and I am trying to wrap my head around something.
In this implementation (which of course works), the pivot is chosen as the middle element; the left pointer then moves right and the right pointer moves left, swapping elements to partition around the pivot.
I was trying the array [4, 3, 2, 6, 8, 1, 0].
On the first partition, pivot is 6 and all the left elements are already smaller than 6, so the left pointer will stop at the pivot. On the right side, we will swap 0 with 6, and then 1 and 8, so at the end of the first iteration, the array will look like:
[4, 3, 2, 0, 1, 8, 6].
However, I was under the impression that after each iteration in quicksort, the pivot ends up in its rightful place, so here it should end up in position 5 of the array.
So, is it possible (and OK) that the pivot doesn't end up in its correct place after the partition, or is there something obvious I am missing?
There are many possible variations of the quicksort algorithm. In this one it is OK for the pivot not to be in its correct place after its iteration.
The defining feature of every variation of the quicksort algorithm is that after the partition step, we have a part at the beginning of the array where all the elements are less than or equal to the pivot, and a non-overlapping part at the end of the array where all the elements are greater than or equal to the pivot. There may also be a part between them, where every element is equal to the pivot. This layout ensures that after we sort the left part and the right part with recursive calls, and leave the middle part intact, the whole array will be sorted.
Notice that, in general, elements equal to the pivot may go to any part of the array. A good implementation of quicksort, one that avoids quadratic time for the most obvious case, i.e. all equal elements, must spread the elements equal to the pivot between the parts sensibly.
Possible variants include:
The middle part includes only 1 element: the pivot. In that case the pivot takes its final place in the array after the partition and won't be used in the recursive calls. That's what you meant by the pivot taking its place in its iteration. For this approach a good implementation must move about half of the elements equal to the pivot to the left part and the other half to the right part, otherwise we would have quadratic time for an array with all equal elements.
There is no middle part. The pivot and all elements equal to it are spread between the left and the right part. That's what the implementation you linked does. Once again, in this approach about half of the elements equal to the pivot should go to the left part, and the other half to the right part. This can also be mixed with the first variation, depending on whether we are sorting an array with an odd or an even number of elements.
Every element equal to the pivot goes to the middle part. There are no elements equal to the pivot in either the left or the right part. That's quite efficient and that's the example Wikipedia gives for solving the all-elements-equal problem. Arrays with all elements equal to each other are sorted in linear time in that case.
Thus, a correct and efficient implementation of quicksort is quite tricky (there is also the problem of choosing a good pivot, for which several approaches with different tradeoffs exist, as well as the optimisation of switching to a different, non-recursive sorting algorithm for small sub-array sizes).
Also, it seems that the implementation you linked to may do recursive calls on overlapping subarrays:
if (i <= j) {
    exchange(i, j);
    i++;
    j--;
}
For example, when i is equal to j, that element is swapped with itself, and i then becomes greater than j by 2. After that, 3 elements will overlap between the ranges of the following recursive calls. The code still seems to work correctly, though.
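For illustration, here is a compact sketch of this style of two-pointer partition (not the exact code the question links to; the names are mine). Note that the pivot value is only guaranteed to end up somewhere consistent with the two parts, not necessarily at its final sorted index:

static void quicksort(int[] a, int low, int high) {
    if (low >= high) return;
    int pivot = a[(low + high) / 2];        // the middle element's value, not its index
    int i = low, j = high;
    while (i <= j) {
        while (a[i] < pivot) i++;           // find an element that belongs on the right
        while (a[j] > pivot) j--;           // find an element that belongs on the left
        if (i <= j) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++;
            j--;
        }
    }
    quicksort(a, low, j);                   // elements <= pivot
    quicksort(a, i, high);                  // elements >= pivot
}

// For {4, 3, 2, 6, 8, 1, 0} the first partition produces {4, 3, 2, 0, 1, 8, 6}:
// the pivot value 6 ends up at index 6, not at its final sorted index 5.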

Heapsort using multiple heaps

I found a variant of Heapsort using multiple heaps at http://students.ceid.upatras.gr/~lebenteas/Heapsort-using-Multiple-Heaps-final.pdf. The solution proposes that instead of the traditional Heapsort algorithm, where after each swap we do another siftdown to bring the highest value in the current heap to the root, we can do some other things. However, I cannot understand what exactly they mean by 'other things'.
For example, at one point they say We “forget”, for the time being, the existence of the root. That surely means we are, for the moment, postponing the swap of the highest element with the last element of the heap. However, just a few lines later they say "So far, two elements have been transferred in the sorted part of the heap", which runs counter to the idea that the swapping hasn't been done yet. Also, in the figure on page 97, the node with value 1 is missing, and I don't understand why.
Can anybody give me an idea of what exactly the authors are trying to convey, and how worthwhile it can be?
(The line you asked about is in section 2.3, so I will explain the variation of heapsort which is proposed in section 2.3:)
When the author says we "forget" the existence of the root, this does not mean that they are stalling the swapping of the highest element. The swap is done, but they temporarily delay rebuilding the heap. After swapping the highest element into the root position, they compare the roots of the 2 subheaps, and swap one or the other with the next-highest element. Then, after doing 2 swaps (rather than 1), they rebuild the heap.
Then they take this idea a step further in sections 3 and 4, and propose another variant of heapsort, which uses more than one heap.
How do you keep more than one heap in an array? (To make it concrete, let's talk about 2 heaps.) Well, how do you keep a single heap? The root goes at index 0, its children are at 1 and 2, then the children of the left subheap are at 3 and 4, etc., right?
To put 2 heaps together in an array, keep the 2 roots at 0 and 1. The children of the first root go at 2 and 3, then the children of the 2nd root at 4 and 5... with such an arrangement, it is still possible to navigate up and down the tree by doing simple arithmetic operations on indexes.
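A tiny sketch of that index arithmetic, assuming the two roots sit at indexes 0 and 1 (the method names are mine):

// Two heaps interleaved in one array: roots at 0 and 1,
// children of node i at 2*i + 2 and 2*i + 3.
static int leftChild(int i)  { return 2 * i + 2; }
static int rightChild(int i) { return 2 * i + 3; }
static int parent(int i)     { return (i - 2) / 2; }   // for any non-root node, i.e. i >= 2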
The standard heapsort repeats 2 steps: swap the root with the last element in the "heap" area, then siftDown to rebuild the heap. This heapsort repeats the following 3 steps: compare the 2 roots to see which one is bigger, swap that one with the last element in the "heap" area, then call siftDown on the appropriate heap.
This requires an extra compare at each step, but the siftDown operations work on slightly shallower heaps, which saves more than a single compare.

How worthwhile is this optimization for Heapsort?

The traditional Heapsort algorithm swaps the last element of the heap with the root of the current heap after every heapification, then continues the process again. However, I noticed that it is kind of unnecessary.
After heapifying a sub-array, the root contains the highest value (in a max-heap), and the next 2 elements in the array must follow the root in the sorted array, either in the same order as they are now, or exchanged if they are reverse-sorted. So instead of just swapping the root with the last element, wouldn't it be better to swap the first 3 elements (the root, plus the 2nd and 3rd elements after exchanging them if necessary) with the last 3 elements, so that the 2 subsequent heapifications (for the 2nd and 3rd elements) are dispensed with?
Is there any disadvantage to this method (apart from the possible swap of the 2nd and 3rd elements, which should be trivial)? If not, and it is indeed better, how much of a performance boost will it give? Here is the pseudo-code:
function heapify(a,i) {
    # assumes node i has two children, both of which are heaps. However, node i itself might not be a heap, i.e. one of its children may be greater than its value.
    # if so, find the greater of its two children, then swap the parent with that value.
    # this might render that child no longer a heap, so recurse
}
function create_heap(a) {
    # all elements after index ⌊n/2⌋ are leaf nodes, thus heapify() should be applied to all elements from index 1 to ⌊n/2⌋
}
function heapsort(a) {
    create_heap(a); # a is now a max-heap
    # root of the heap, that is a[1], is the maximum, so swap it with a[n].
    # The root now contains an element smaller than its children, both of which are themselves heaps.
    # so apply heapify(a,1). Note: the heap length is now n-1, as a[n] is the largest element already found
    # now again the array is a heap. The highest value is in a[1]. Swap it with a[n-1].
    # continue
}
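For reference, a compact runnable version of the textbook algorithm this pseudo-code describes (Java, 0-based indexing, so the root is a[0] rather than a[1]); it does not include the proposed first-three/last-three swap:

// Sift a[i] down within a[0..n-1], assuming both subtrees of i already satisfy the heap property.
static void heapify(int[] a, int i, int n) {
    int largest = i, l = 2 * i + 1, r = 2 * i + 2;
    if (l < n && a[l] > a[largest]) largest = l;
    if (r < n && a[r] > a[largest]) largest = r;
    if (largest != i) {
        int t = a[i]; a[i] = a[largest]; a[largest] = t;
        heapify(a, largest, n);                    // the displaced child may no longer be a heap
    }
}

static void heapsort(int[] a) {
    int n = a.length;
    for (int i = n / 2 - 1; i >= 0; i--)           // nodes past n/2 are leaves, so start from n/2 - 1
        heapify(a, i, n);
    for (int end = n - 1; end > 0; end--) {
        int t = a[0]; a[0] = a[end]; a[end] = t;   // move the current maximum to the end
        heapify(a, 0, end);                        // restore the heap on the remaining prefix
    }
}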
Suppose the array is [4,1,3,2,16,9,10,14,8,7]. After building the heap (repeated heapify), it becomes [16,14,10,8,7,9,3,2,4,1]. Heapsort's first iteration swaps 16 with 1, leading to [1,14,10,8,7,9,3,2,4,16]. Since this leaves the root of the new heap [1,14,10,8,7,9,3,2,4] un-heaped (14 and 10 are both greater than 1), another heapify produces [14,8,10,4,7,9,3,2,1]. Now 14 is the root, so it is swapped with 1, yielding [1,8,10,4,7,9,3,2,14] and making the array [1,8,10,4,7,9,3,2,14,16]. Again the root 1 is un-heaped, so another heapify makes the heap [10,8,9,4,7,1,3,2]. Then 10 is swapped with 2, making the array [2,8,9,4,7,1,3,10,14,16].
My point is that instead of doing the 2nd and 3rd heapifications just to place 14 and 10 before 16, we can tell from the first heapification that, because 14 and 10 are the root's children, they are the 2nd and 3rd largest elements (in one order or the other). So after a single comparison between them (here they are already in order, 14 larger than 10), I swap all three (16,14,10) with the last three elements (2,4,1), making the array [1,4,2,8,7,9,3,10,14,16]. This brings us to a state similar to the one reached after the two further heapifications: [2,8,9,4,7,1,3,10,14,16] with the standard method, compared to [1,4,2,8,7,9,3,10,14,16] now. Both will need further heapification, but the second method arrived here with just one comparison between two elements (14 and 10).
The second largest element of the heap is present in the second or third position, but the third largest can be present further down, at depth 2. (See the figure in http://en.wikipedia.org/wiki/Heap_(data_structure) ). Furthermore, after swapping the first three elements with the last three, the heapify method would first heapify the first subtree of the root, followed by the second subtree of the root, followed by the whole tree. Thus the total cost of this operation is close to three times the cost of swapping the top element with the last and calling heapify. So you won't gain anything by doing this.
