Traceless in place selection sort - algorithm

You are given an array A[1..n] of length n, with each cell containing a 〈height, weight〉 pair. All height values are distinct, and so are all weight values. The array is sorted in increasing order of the height values. Your task is to design a recursive divide-and-conquer algorithm that, given an integer k ∈ [1, n], finds the entry with the kth smallest weight value. You are allowed to use only O(1) extra space at every level of recursion. Though your algorithm is permitted to reorder the entries of A if required, it must restore the original order of the entries before termination. Your algorithm must run in Θ(n) time.
The algorithm I can think of is selection sort, but I am not able to meet the required time and space bounds with it. Any help or direction would be appreciated.

If O(n) expected time is acceptable, you can use quickselect: run the quicksort partition step, recurse only into the side containing position k, and stop once the pivot is proved to be the kth element itself. That finds the kth element in O(n) expected time (O(n^2) worst case).
Now the tricky part: you need to restore the original array. It is simple after the first run of the partition procedure on the sorted array: just replay the procedure in the opposite direction and reconstruct the initial array. From there the recursive idea emerges: if the pivot happens to be larger than the kth element, the left subarray is still sorted, so you can repeat the procedure there, find the kth element, and restore the array. But if the pivot turns out to be smaller than the kth element, restore the array and run the partition "reflected" (moving from right to left); then the subarray to the right of the pivot is sorted. Repeat the procedure recursively.
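To make the selection half concrete, here is a minimal Python sketch of quickselect over (height, weight) pairs, selecting by weight. The tuple representation and helper names are my own, and note that the restore step below simply re-sorts by height, which is correct (heights are distinct and the input was height-sorted) but costs O(n log n); the O(n) restore described above would instead undo the partition moves directly.

    import random

    def partition_by_weight(a, lo, hi, pivot_idx):
        # Lomuto partition of a[lo..hi] comparing weights; returns pivot's index.
        a[pivot_idx], a[hi] = a[hi], a[pivot_idx]
        i = lo
        for j in range(lo, hi):
            if a[j][1] < a[hi][1]:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        return i

    def kth_smallest_weight(a, k):
        # a: list of (height, weight) tuples sorted by height; k is 1-based.
        lo, hi = 0, len(a) - 1
        while True:
            p = partition_by_weight(a, lo, hi, random.randint(lo, hi))
            if p == k - 1:
                answer = a[p]
                break
            if p < k - 1:
                lo = p + 1
            else:
                hi = p - 1
        a.sort(key=lambda e: e[0])  # restore original (height-sorted) order
        return answer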

Related

Comparison-based algorithm that pairs the largest element with the smallest one in linear time

Given an array of integers, I want to design a comparison-based algorithm
that pairs the largest element with the smallest one, the second largest with the second smallest, and so on. Obviously this is easy if I sort the array, but I want to do it in O(n) time. How can I possibly solve this problem?
Such an algorithm cannot exist; here is a proof by contradiction.
Suppose there were such an algorithm. Then we could get an array of (kth min, kth max) pairs in O(n) comparisons.
From those pairs we could recover the fully sorted array in O(n) additional steps, by taking all the minimums in order and then all the maximums in reverse order.
So we would have a comparison-based sorting algorithm that sorts in O(n).
Yet it can be proven that any comparison-based sorting algorithm must take at least n log n steps in the worst case (many proofs online, e.g. https://www.geeksforgeeks.org/lower-bound-on-comparison-based-sorting-algorithms/).
Hence we have a contradiction, so such an algorithm does not exist.
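To see the reduction concretely, here is a sketch in Python. The name pair_extremes stands for the hypothetical O(n) pairing routine (which, as argued, cannot exist), and an even-length input is assumed for simplicity:

    def sort_via_pairs(a, pair_extremes):
        # If pair_extremes ran in O(n) comparisons, this function would be an
        # O(n) comparison sort, contradicting the n log n lower bound.
        pairs = pair_extremes(a)                 # [(min1, max1), (min2, max2), ...]
        mins = [p[0] for p in pairs]             # ascending: min1 <= min2 <= ...
        maxs = [p[1] for p in reversed(pairs)]   # maxes reversed, hence ascending
        return mins + maxs                       # O(n) extra work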

Max number of elements whose sum is less than or equal to a target in an unsorted array (without sorting)

I am working on an algorithm question, which is as follows:
Given an array of real numbers in arbitrary (unsorted) order, find the maximum number of elements whose sum is less than or equal to a target value, in O(n) time. Therefore no sorting is allowed.
I think I should use the randomized quickselect algorithm, which finds the kth element of an unsorted array in O(n). I want to know: what is the right modification of randomized quickselect for my question?
In randomized quickselect, the array is partitioned around a randomly chosen pivot value: the lower partition contains all values less than the pivot, and the upper partition contains all values greater than it. For your problem, compute the sum of the lower partition. If that sum plus the pivot is at most the target, the whole lower partition and the pivot can be taken, so commit to them, subtract their sum from the target, and partition the upper part with a new random pivot. If the sum exceeds the target, the answer lies entirely within the lower partition, so partition that part again. Continue until the range is exhausted. Note that, as with quickselect, this is O(n) only in expectation; the worst case is O(n^2).
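Here is a Python sketch of that scheme. It relies on the observation that the best choice of k elements is always the k smallest (the minimal achievable sum of any k elements is the sum of the k smallest); the function names are my own:

    import random

    def partition(a, lo, hi, pivot_idx):
        # Lomuto partition of a[lo..hi]; returns the pivot's final index.
        a[pivot_idx], a[hi] = a[hi], a[pivot_idx]
        i = lo
        for j in range(lo, hi):
            if a[j] < a[hi]:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        return i

    def max_count_within_target(a, target):
        # Maximum number of elements whose sum is <= target.
        # Expected O(n); worst case O(n^2), as noted above.
        lo, hi = 0, len(a) - 1
        taken, budget = 0, target
        while lo <= hi:
            p = partition(a, lo, hi, random.randint(lo, hi))
            lower_sum = sum(a[lo:p])             # sum of the lower partition
            if lower_sum + a[p] <= budget:
                taken += (p - lo) + 1            # take lower partition and pivot
                budget -= lower_sum + a[p]
                lo = p + 1                       # continue in the upper partition
            else:
                hi = p - 1                       # answer lies in the lower partition
        return taken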

Incorrect Worst-case time complexity of Insertion Sort

Hi, I am new to algorithms and quite fascinated by them.
I am trying to figure out the worst-case time complexity of insertion sort. Everywhere it is stated as O(n^2), but I believe it can be O(n log n).
Here is my reasoning:
Insertion sort looks at the 1st element and assumes it is sorted. Next it looks at the 2nd element, compares it with the predecessor sorted sublist of one element, and inserts it based on comparisons with the elements of that sublist. The process repeats for each subsequent element.
Everywhere it is mentioned that inserting an element into the predecessor sorted sublist (essentially a linear search) takes O(n) time, and since we do this for n elements, the total is O(n^2).
However, if we use binary insertion to place the element into the predecessor sublist, it should take O(log n) time, where n is the length of the sublist: compare the new element with the middle element of the sorted sublist, and if it is greater, the insertion point lies between the middle element and the last element of the sublist. We can use binary search because we know the predecessor sublist is sorted.
Repeating this for n items should then take O(n log n).
So shouldn't the worst-case time complexity be O(n log n) instead of O(n^2)?
Yes, you can find the insertion point in O(log n) time, but then you have to make space to insert the item, and that shifting takes O(n) time.
Consider this partially-sorted array:
[1,2,3,5,6,7,9,4]
You get to the last item, 4, and a binary search quickly locates the position where it needs to be inserted. But now you have to make space, which means moving the items 5, 6, 7, and 9 each one place toward the end of the array. That shifting is what makes insertion sort O(n^2).
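A short Python sketch of "binary insertion sort" makes the split explicit: the search is logarithmic, but the shifting still dominates.

    import bisect

    def binary_insertion_sort(a):
        # Comparisons: O(n log n) total. Shifts: still O(n^2) worst case,
        # because inserting at position p moves every element after p.
        for i in range(1, len(a)):
            x = a[i]
            p = bisect.bisect_right(a, x, 0, i)  # O(log i) comparisons
            a[p + 1:i + 1] = a[p:i]              # O(i - p) element moves
            a[p] = x
        return a

On the array above, the binary search for 4 is cheap, but the slice assignment still has to move 5, 6, 7, and 9 one place each.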

Get the k smallest elements of an array using quick sort

How would you find the k smallest elements of an unsorted array using quicksort (other than just sorting the whole array and taking the first k elements)? Would the worst-case running time be the same, O(n^2)?
You can adapt quicksort: just don't recurse into any portion of the array other than the one containing position k, until your pivot lands at position k. At that point the first k elements of the array are the k smallest. If you don't need the output sorted, you can stop there.
Warning: non-rigorous analysis ahead.
However, I think the worst-case time complexity will still be O(n^2). That occurs when you always pick the biggest or smallest element as your pivot, so the recursion degenerates into a selection-sort-like scan of nearly the whole array at every level (you never pick a pivot that actually divides and conquers).
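A Python sketch of that partial quicksort (effectively quickselect), stopping as soon as a pivot lands at index k:

    import random

    def partition(a, lo, hi, pivot_idx):
        # Lomuto partition of a[lo..hi]; returns the pivot's final index.
        a[pivot_idx], a[hi] = a[hi], a[pivot_idx]
        i = lo
        for j in range(lo, hi):
            if a[j] < a[hi]:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        return i

    def k_smallest(a, k):
        # Rearranges a so that a[:k] holds the k smallest elements, unsorted.
        # Expected O(n); worst case O(n^2) with unlucky pivots.
        lo, hi = 0, len(a) - 1
        while lo < hi:
            p = partition(a, lo, hi, random.randint(lo, hi))
            if p < k:
                lo = p + 1
            elif p > k:
                hi = p - 1
            else:
                break
        return a[:k]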
Another solution (if the only purpose of this collection is to pick out the k minimum elements) is to keep a heap capped at k nodes, i.e. of height ceil(log k). Make it a max-heap, so that its root is the largest of the k smallest elements seen so far and gets evicted whenever something smaller arrives. Each insert or removal then costs O(log k), so one pass over the input takes O(n log k) worst case, versus O(n log n) for a full heapsort. Popping the heap at the end yields the k elements in sorted order. If you need the whole array sorted, a full heapsort or mergesort gives that in O(n log n) worst case.
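A Python sketch of the capped heap; heapq is a min-heap, so values are negated to simulate the max-heap:

    import heapq

    def k_smallest_heap(a, k):
        # Max-heap (negated values) capped at k nodes: the root is the largest
        # of the k smallest seen so far. Each operation is O(log k), so one
        # pass is O(n log k); sorting the final k items adds O(k log k).
        heap = []
        for x in a:
            if len(heap) < k:
                heapq.heappush(heap, -x)
            elif x < -heap[0]:                   # smaller than current kth smallest
                heapq.heapreplace(heap, -x)
        return sorted(-v for v in heap)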

Sorting Algorithms with constraints on Array

I am trying to come up with an algorithm that sorts an array A in O(n log log n) time,
where A[0..n-1] has the property that A[i] >= A[i-j] for all j >= log(n).
So far I have thought of partitioning A into blocks of size log n each.
Then I think the first block should be strictly smaller than the blocks that come after it?
I think I'm missing part of it.
Tree sort would be an option here. Start at the left end of your array and feed elements into the tree. Whenever your tree holds more than log(n) elements, take the smallest element out (you know for sure that all subsequent elements are larger) and put it back into the sorted array. This way the tree size is always log(n), and the cost of a tree operation is O(log log n). In fact you only need the operations (1) insert an element and (2) remove the smallest element, so you don't necessarily need a tree; any sort of priority queue will do. This way both the average and the worst-case performance meet your requirements.
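A Python sketch with heapq as the priority queue. Taking the window size to be ceil(log2 n) is an assumption about which logarithm the problem statement intends:

    import heapq
    import math

    def sort_log_sorted(a):
        # The property A[i] >= A[i-j] for all j >= log n means each element's
        # sorted position is less than log n away, so a heap of ~log n elements
        # always contains the next-smallest. Total cost: O(n log log n).
        n = len(a)
        if n < 2:
            return list(a)
        k = max(1, math.ceil(math.log2(n)))
        heap, out = [], []
        for x in a:
            heapq.heappush(heap, x)              # heap never exceeds k+1 elements
            if len(heap) > k:
                out.append(heapq.heappop(heap))  # smallest is now final
        while heap:
            out.append(heapq.heappop(heap))
        return out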
