Partitioning an array not coming out correctly - sorting

I'm trying to partition an array so that each element in the first half of the array is less than each element in the second half. This is the same partition algorithm that is used in quicksort. For some reason the array A = [2, 8, 7, 1, 3, 5, 6, 4] comes out correctly, but A = [7, 3, 6, 1, 9, 5, 4, 8] does not.
def partition(A):
    x = A[len(A)-1]
    i = -1
    for j in range(0, len(A)-2):
        if A[j] <= x:
            i = i + 1
            # exchange A[j] and A[i]
            jValue = A[j]
            A[j] = A[i]
            A[i] = jValue
    # exchange A[len(A)-1] and A[i+1]
    rValue = A[len(A)-1]
    A[len(A)-1] = A[i+1]
    A[i+1] = rValue
    print(A)

The issue is that the code needs to pick a pivot that represents the median of the array. This could be done using quickselect or something similar. Note that quickselect has a worst-case time complexity of O(n^2). Wiki article:
http://en.wikipedia.org/wiki/Quickselect
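To illustrate that idea (this is a sketch of my own, not the original poster's routine), one could pick the lower median with Python's statistics module and then sweep the whole array, moving everything less than or equal to the pivot to the front:

import statistics

def partition_around_median(A):
    # Pick the lower median as the pivot value. statistics.median_low sorts
    # internally, so this step is O(n log n); a quickselect-style routine
    # would make it expected O(n).
    pivot = statistics.median_low(A)
    i = -1
    for j in range(len(A)):
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    return A

For A = [7, 3, 6, 1, 9, 5, 4, 8] this returns [3, 1, 5, 4, 9, 7, 6, 8]: every element in the first four positions is smaller than every element in the last four (with distinct values the split lands at the middle).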

Related

Shuffle an int array such that array elements in even indices are smaller than array elements in odd indices

I need all the elements at the even indices (arr[0], arr[2], arr[4], etc.) to be smaller than the elements at the odd indices (arr[1], arr[3], arr[5], etc.).
My approach was to find the MEDIAN and then write out all elements smaller than the median at the even indices and all elements larger than the median at the odd indices.
Question: is there a way to do the array shuffling IN PLACE after finding the median ?
import random

def quickselect(items, item_index):
    def select(lst, l, r, index):
        # base case
        if r == l:
            return lst[l]
        # choose random pivot
        pivot_index = random.randint(l, r)
        # move pivot to beginning of list
        lst[l], lst[pivot_index] = lst[pivot_index], lst[l]
        # partition
        i = l
        for j in range(l+1, r+1):
            if lst[j] < lst[l]:
                i += 1
                lst[i], lst[j] = lst[j], lst[i]
        # move pivot to correct location
        lst[i], lst[l] = lst[l], lst[i]
        # recursively partition one side only
        if index == i:
            return lst[i]
        elif index < i:
            return select(lst, l, i-1, index)
        else:
            return select(lst, i+1, r, index)

    if items is None or len(items) < 1:
        return None
    if item_index < 0 or item_index > len(items) - 1:
        raise IndexError()
    return select(items, 0, len(items) - 1, item_index)

def shuffleArray(array, median):
    newArray = [0] * len(array)
    i = 0
    for x in range(0, len(array), 2):
        newArray[x] = array[i]
        i += 1
    for y in range(1, len(array), 2):
        newArray[y] = array[i]
        i += 1
    return newArray
So here's my interpretation of the question.
Shuffle an array so that all data in even indices are smaller than all data in odd indices.
Eg
[1, 3, 2, 4] would be valid, but [1, 2, 3, 4] wouldn't be.
This stops us from just sorting the array.
1. Sort the array, smallest to largest.
2. Split the array at its mid point (rounding the mid point down).
3. Shuffle the two arrays together, such that given the arrays [1, 2, 3] and [4, 5, 6] it becomes [1, 4, 2, 5, 3, 6].
To elaborate on 3, here's some example code... (using JavaScript)
let a = [ 1, 2, 3 ];
let b = [ 4, 5, 6 ];
let c = []; // this will be the interleaved array
for (let i = 0; i < a.length + b.length; i++) {
    if (i % 2 == 0) c.push(a[Math.floor(i / 2)]);
    else c.push(b[Math.floor(i / 2)]);
}
This produces the array [1, 4, 2, 5, 3, 6], which I believe fulfils the requirement.
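The same three steps translate directly to Python; this is a sketch of my own (it builds a new list rather than working in place):

def interleave_halves(arr):
    # Sort, split (the even positions need the extra element when the length
    # is odd), then interleave: smaller half at even indices, larger half at
    # odd indices.
    s = sorted(arr)
    cut = (len(s) + 1) // 2
    small, large = s[:cut], s[cut:]
    out = []
    for i in range(len(s)):
        half = small if i % 2 == 0 else large
        out.append(half[i // 2])
    return out

For example, interleave_halves([1, 2, 3, 4]) returns [1, 3, 2, 4], which matches the valid arrangement above.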

Partial Insertion Sort

Is it possible to sort only the first k elements from an array using insertion sort principles?
Because as the algorithm runs over the array, it sorts as it goes.
Since it needs to check all the elements (to find out which is the smallest), it will eventually sort the whole thing.
Example:
Original array: {5, 3, 8, 1, 6, 2, 8, 3, 10}
Expected output for k = 3: {1, 2, 3, 5, 8, 6, 8, 3, 10} (Only the first k elements were sorted, the rest of the elements are not)
Such partial sorting is possible; the resulting method looks like a hybrid of selection sort (for the search for the smallest element in the tail of the array) and insertion sort (for the shifting of elements, but without comparisons). The sort preserves the order of the tail elements (though that was not asked for explicitly).
void ksort(int a[], int n, int k)
{
    int i, j, t;
    for (i = 0; i < k; i++)
    {
        int min = i;
        for (j = i+1; j < n; j++)
            if (a[j] < a[min]) min = j;
        t = a[min];
        for (j = min; j > i; j--)
            a[j] = a[j-1];
        a[i] = t;
    }
}
Yes, it is possible. This will run in time O(k n) where n is the size of your array.
You are better off using heapsort. It will run in time O(n + k log(n)) instead. The heapify step is O(n), then each element extracted is O(log(n)).
A technical note. If you're clever, you'll establish the heap backwards, towards the end of your array. So when you think of it as a tree, put the (n-2i)th and (n-2i-1)th elements below the (n-i)th one. So take your array:
{5, 3, 8, 1, 6, 2, 8, 3, 10}
That is a tree like so:
10
    3
        2
            3
            5
        6
    8
        1
        8
When we heapify we get the tree:
1
    2
        3
            3
            5
        6
    8
        10
        8
Which is to say the array:
{5, 3, 8, 10, 6, 3, 8, 2, 1}
And now each element extraction requires swapping the last element to the final location, then letting the large element "fall down the tree". Like this:
# swap
{1*, 3, 8, 10, 6, 3, 8, 2, 5*}
# the 5 compares with 8, 2 and swaps with the 2:
{1, 3, 8, 10, 6, 3, 8?, 5*, 2*}
# the 5 compares with 3, 6 and swaps with the 3:
{1, 3, 8, 10, 6?, 5*, 8, 3*, 2}
# The 5 compares with the 3 and swaps, note that 1 is now outside of the tree:
{1, 5*, 8, 10, 6, 3*, 8, 3, 2}
Which in an array-tree representation is:
{1}
2
    3
        3
            5
        6
    8
        10
        8
Repeat again and we get:
# Swap
{1, 2, 8, 10, 6, 3, 8, 3, 5}
# Fall
{1, 2, 8, 10, 6, 5, 8, 3, 3}
aka:
{1, 2}
3
    3
        5
        6
    8
        10
        8
And again:
# swap
{1, 2, 3, 10, 6, 5, 8, 3, 8}
# fall
{1, 2, 3, 10, 6, 8, 8, 5, 3}
or
{1, 2, 3}
3
    5
        8
        6
    8
        10
And so on.
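If Python is acceptable (an assumption on my part; the answers here use C), the same extraction scheme can be sketched with the standard heapq module. heapq keeps the heap at the front of the list rather than at the back, but the complexity argument is identical:

import heapq

def first_k_sorted(a, k):
    # O(n) heapify, then k pops at O(log n) each: O(n + k log n) overall.
    heap = list(a)                 # work on a copy; heapify builds a min-heap in place
    heapq.heapify(heap)
    smallest = [heapq.heappop(heap) for _ in range(k)]
    # The leftover heap is the unsorted remainder; note that, unlike the
    # selection-based ksort above, its original order is not preserved.
    return smallest + heap

Calling first_k_sorted([5, 3, 8, 1, 6, 2, 8, 3, 10], 3) yields [1, 2, 3] followed by the remaining values in heap order.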
Just in case anyone needs this in the future, I came up with a solution that is "pure" in the sense of not being a hybrid between the original Insertion sort and some other sorting algorithm.
void partialInsertionSort(int A[], int n, int k){
    int i, j, aux, start;
    for(i = 1; i < n; i++){
        aux = A[i];
        if (i > k-1){
            start = k - 1;
            //This next part is needed only to maintain
            //the original element order
            if(A[i] < A[k])
                A[i] = A[k];
        }
        else start = i - 1;
        for(j = start; j >= 0 && A[j] > aux; j--)
            A[j+1] = A[j];
        A[j+1] = aux;
    }
}
Basically, this algorithm sorts the first k elements. The k-th element then acts like a pivot: only when a remaining array element is smaller than this pivot is it inserted into its correct position among the sorted k elements, just as in the original algorithm.
Best-case scenario: the array is already ordered.
Taking comparison as the basic operation, the number of comparisons is 2n - k - 1 → Θ(n).
Worst-case scenario: the array is ordered in reverse.
Taking comparison as the basic operation, the number of comparisons is (2kn - k² - 3k + 2n)/2 → Θ(kn).
(Both take into account the comparison made to maintain the array order.)

How can the complexity of this function be decreased?

I got this function:
def get_sum_slices(a, sum)
  count = 0
  a.length.times do |n|
    a.length.times do |m|
      next if n > m
      count += 1 if a[n..m].inject(:+) == sum
    end
  end
  count
end
Given the array [-2, 0, 3, 2, -7, 4] and 2 as sum, it will return 2 because the sums of two slices equal 2: [2] and [3, 2, -7, 4]. Anyone have an idea on how to improve this to O(N*log(N))?
I am not familiar with ruby, but it seems to me you are trying to find how many contiguous subarrays sum to sum.
Your code brute-forces over ALL subarrays (O(N^2) of them), sums each one (O(N) each), and checks whether it matches.
That totals O(N^3).
It can be done more efficiently (1):
Define a new array sums as follows:
sums[i] = arr[0] + arr[1] + ... + arr[i]
It is easy to calculate the above in O(N) time. Note that, with the assumption of non-negative numbers, this sums array is sorted.
Now, iterate over the sums array and, for each element sums[i], do a binary search to check whether there is some index j such that sums[j] - sums[i] == SUM. If there is, increase the count by one (a little extra work is needed if the array can contain zeros, but it does not affect the complexity).
Since each binary search takes O(log N) and you do one per element, you get an O(N log N) algorithm.
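Under that non-negativity assumption, a small Python sketch of the binary-search variant might look like this (the function name is mine; the prefix list is seeded with 0 so that slices starting at index 0 are counted, and bisect_left/bisect_right take care of repeated prefix sums caused by zeros):

import bisect

def count_sum_slices_sorted(arr, target):
    # Prefix sums; with non-negative entries this list is already sorted.
    prefix = [0]
    for x in arr:
        prefix.append(prefix[-1] + x)
    count = 0
    for i, p in enumerate(prefix):
        # count every j > i with prefix[j] == p + target
        lo = bisect.bisect_left(prefix, p + target, i + 1)
        hi = bisect.bisect_right(prefix, p + target, i + 1)
        count += hi - lo
    return count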
Similarly, by adding the elements of sums to a hash set instead of placing them in a sorted array, you can reach O(N) average-case performance, since looking up each element is now O(1) on average.
pseudo code:
input:  arr, sum
output: numOccurrences - the number of contiguous subarrays that sum to sum

currSum = 0
numOccurrences = 0
S = new hash set (multiset actually), initially containing the single value 0 (the empty prefix)
for each element x in arr:
    currSum += x
    numOccurrences += number of occurrences of (currSum - sum) in S
    add currSum to S
return numOccurrences
Note that the hash-set variant does not need the restriction to non-negative numbers, and handles that case as well.
(1) Assuming your array contains only non-negative numbers.
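Here is a runnable Python version of that hash-based pseudocode (the function name and the use of collections.Counter are my own); it works in a single pass and also copes with negative values and with sum == 0:

from collections import Counter

def count_sum_slices(arr, target):
    # prefix_counts[p] = how many prefixes seen so far sum to p;
    # the empty prefix (sum 0) lets slices starting at index 0 be counted.
    prefix_counts = Counter({0: 1})
    curr_sum = 0
    count = 0
    for x in arr:
        curr_sum += x
        # every earlier prefix with sum (curr_sum - target) ends a matching slice here
        count += prefix_counts[curr_sum - target]
        prefix_counts[curr_sum] += 1
    return count

Each element is processed with O(1) average-cost hash operations, giving the O(N) average case mentioned above.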
According to amit's algorithm:
def get_sum_slices3(a, sum)
  s = a.inject([]) { |m, e| m << e + m.last.to_i }
  s.sort!
  s.count { |x| s.bsearch { |y| x - y == sum } }
end
Ruby's sort uses quicksort, which is n log n in most cases.
You should explain in more detail what you're trying to achieve here. Anyway, computing the number of element combinations that have a specific sum could be done like this:
def get_sum_slices(a, sum)
  count = 0
  (2..a.length).each do |n|
    a.combination(n).each do |combination|
      if combination.inject(:+) == sum
        count += 1
        puts combination.inspect
      end
    end
  end
  count
end
btw your example should return 6
irb> get_sum_slices [-2, 0, 3, 2, -7, 4], 0
[-2, 2]
[-2, 0, 2]
[3, -7, 4]
[0, 3, -7, 4]
[-2, 3, 2, -7, 4]
[-2, 0, 3, 2, -7, 4]
=> 6

Finding minimum element to the right of an index in an array for all indices

Given an array, I wish to find the minimum element to the right of the current element at index i, where 0 <= i < n, and store the index of that minimum element in another array.
For example, I have the array A = {1,3,6,7,8}.
The result array would be R = {1,2,3,4}. (R stores the indices of the minimum elements.)
I could only think of an O(N^2) approach, where for each element in A I would traverse the remaining elements to its right and find the minimum.
Is it possible to do this in O(N)? I want to use the solution to solve another problem.
You should be able to do this in O(n) by filling the array from the right hand side and maintaining the index of the current minimum, as per the following pseudo-code:
def genNewArray (oldArray):
    newArray = new array[oldArray.size]
    saveIndex = -1
    for i = newArray.size - 1 down to 0:
        newArray[i] = saveIndex
        if saveIndex == -1 or oldArray[i] < oldArray[saveIndex]:
            saveIndex = i
    return newArray
This passes through the array once, giving you the O(n) time complexity. It can do this because, once you've found a minimum beyond element N, it will only change for element N-1 if element N is less than the current minimum.
The following Python code shows this in action:
def genNewArray (oldArray):
    newArray = []
    saveIndex = -1
    for i in range (len (oldArray) - 1, -1, -1):
        newArray.insert (0, saveIndex)
        if saveIndex == -1 or oldArray[i] < oldArray[saveIndex]:
            saveIndex = i
    return newArray

oldList = [1,3,6,7,8,2,7,4]
x = genNewArray (oldList)
print("idx", [0,1,2,3,4,5,6,7])
print("old", oldList)
print("new", x)
The output of this is:
idx [0, 1, 2, 3, 4, 5, 6, 7]
old [1, 3, 6, 7, 8, 2, 7, 4]
new [5, 5, 5, 5, 5, 7, 7, -1]
and you can see that the indexes at each element of the new array (the second one) correctly point to the minimum value to the right of each element in the original (first one).
Note that I've taken one specific definition of "to the right of", meaning it doesn't include the current element. If your definition of "to the right of" includes the current element, just change the order of the insert and if statement within the loop so that the index is updated first:
idx [0, 1, 2, 3, 4, 5, 6, 7]
old [1, 3, 6, 7, 8, 2, 7, 4]
new [0, 5, 5, 5, 5, 5, 7, 7]
The code for that removes the check on saveIndex since you know that the minimum index for the last element can be found at the last element:
def genNewArray (oldArray):
    newArray = []
    saveIndex = len (oldArray) - 1
    for i in range (len (oldArray) - 1, -1, -1):
        if oldArray[i] < oldArray[saveIndex]:
            saveIndex = i
        newArray.insert (0, saveIndex)
    return newArray
Looks like homework. Let f(i) denote the index of the minimum element to the right of the element at i. Now consider walking backwards (filling in f(n-1), then f(n-2), f(n-3), ..., f(3), f(2), f(1)) and think about how the information in f(i) can give you f(i-1).

Rotating elements i to j in an array

Given an array:
a = [1, 2, 3, 4, 5, 6]
I want to rotate elements i through j in some direction n times. So, for example:
i = 2
j = 3
n = 1
Rotating a will produce:
new_a = [1, 2, 4, 3, 5, 6]
This is what I have:
def rotate_sub(a, i, j, n)
  return a[0...i] + a[i..j].rotate(n) + a[j+1..-1]
end
Is there a better way to do this? Also, since there are no bounds checks, i or j could very well be outside the bounds of the array.
If you are willing to mutate the original array you could do something like:
a[i..j] = a[i..j].rotate n
But I like the functional solution you already have.
I don't think there's a magical way, so perhaps the simplest is the best:
def rotate_sub(a, i, j, n)
  a[0...i] + a[i..j].rotate(n) + a[j+1..-1] if i < j && j < a.size
end
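For readers comparing with the Python examples elsewhere on this page, a rough equivalent of the slice-rotation idea (function name mine, with a similar bounds check) could be:

def rotate_sub(a, i, j, n):
    # Rotate the slice a[i..j] (inclusive) left by n positions,
    # mirroring Ruby's Array#rotate.
    if not (0 <= i < j < len(a)):
        return None
    seg = a[i:j + 1]
    n %= len(seg)
    return a[:i] + seg[n:] + seg[:n] + a[j + 1:]

rotate_sub([1, 2, 3, 4, 5, 6], 2, 3, 1) returns [1, 2, 4, 3, 5, 6], matching the example above.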
