How is counting sort a stable sort?

Suppose my input is (with a, b, and c added to distinguish between equal keys):
1 6a 8 3 6b 0 6c 4
My counting sort will store this as (discarding the a, b, and c info!):
0(1) 1(1) 3(1) 4(1) 6(3) 8(1)
which will give me the result
0 1 3 4 6 6 6 8
So, how is this stable sort?
I am not sure how it is "maintaining the relative order of records with equal keys."
Please explain.

To understand why counting sort is stable, you need to understand that counting sort is not limited to sorting a list of plain integers: it can also sort a list of elements whose keys are integers but which carry additional information. The elements are sorted by their keys, and the associated information travels with them.
A counting sort example that sorts elements with additional information will help you understand this. For instance, suppose we want to sort three stocks by their prices:
[(GOOG 3), (CSCO 1), (MSFT 1)]
Here stock prices are integer keys, and stock names are their associated information.
Expected output for the sorting should be:
[(CSCO 1), (MSFT 1), (GOOG 3)]
(containing both stock prices and names; CSCO must appear before MSFT for the sort to be stable)
A counts array will be calculated for sorting this (let's say stock prices can only be 0 to 3):
counts array: [0, 2, 0, 1] (price "1" appears twice, and price "3" appears once)
If you were just sorting an integer array, you would go through the counts array, output "1" twice and "3" once, and be done; the entire counts array would become all zeros in the process.
But here we want to have stock names in the sorted output as well. How can we obtain this additional information (it seems the counts array already discards it)? Well, the associated information is stored in the original unsorted array. In the unsorted array [(GOOG 3), (CSCO 1), (MSFT 1)], we have both the stock name and its price available. If we can determine which position (GOOG 3) should occupy in the final sorted array, we can copy it there.
To obtain the final position of each element in the sorted array, you don't use the counts array directly to output the sorted elements (as you would for plain integers). Instead, counting sort has an additional step that turns the counts array into a cumulative (prefix) sum array:
counts array: [0, 2, 2, 3] (for i from 1 to 3: counts[i] = counts[i] + counts[i - 1])
This cumulative sum array tells us each value's current position in the final sorted array. For example, counts[1] == 2 means the next item with key 1 should be placed in the 2nd slot of the sorted array. Intuitively, because counts[i] is a cumulative sum from the left, it counts how many items have a key less than or equal to i, which is exactly the (1-based) position of the last such item in the sorted output.
If a $1 stock is encountered for the first time, it is output to the second position of the sorted array, and if a $3 stock is encountered for the first time, it is output to the third position. Each time a $1 stock is copied to the sorted array, we decrement its count in the counts array.
counts array: [0, 1, 2, 3]
(so that the next $1 stock encountered will be placed at position 1)
So we iterate over the unsorted array backwards (this is important for stability), look up each element's position in the counts array, and copy it into the sorted array:
sorted array: [null, null, null]
counts array: [0, 2, 2, 3]
Iterating over the unsorted stocks backwards:
1. the last stock (MSFT 1)
sorted array: [null, (MSFT 1), null] (copy to the second position because counts[1] == 2)
counts array: [0, 1, 2, 3] (decrease counts[1] by 1)
2. the middle stock (CSCO 1)
sorted array: [(CSCO 1), (MSFT 1), null] (copy to the first position because counts[1] == 1 now)
counts array: [0, 0, 2, 3] (decrease counts[1] by 1)
3. the first stock (GOOG 3)
sorted array: [(CSCO 1), (MSFT 1), (GOOG 3)] (copy to the third position because counts[3] == 3)
counts array: [0, 0, 2, 2] (decrease counts[3] by 1)
As you can see, after the array is sorted the counts array (now [0, 0, 2, 2]) does not end up all zeros, as it would when sorting plain integers. That's because the counts array is no longer used to tell how many times an integer appears in the unsorted array; instead, it tells us which position each element should take in the final sorted array. Since we decrement a count every time we output an element with that key, the next element with the same key gets a smaller final position. That is why we must iterate over the unsorted array backwards: the last occurrence is placed rightmost, and stability is preserved.
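To make this concrete, here is a minimal Python sketch of the full stable counting sort described above (the key accessor and the range bound k are my own names for this illustration):

def counting_sort(items, key, k):
    # counts[v] = number of items whose key is v
    counts = [0] * k
    for item in items:
        counts[key(item)] += 1
    # prefix sums: counts[v] = number of items with key <= v
    for v in range(1, k):
        counts[v] += counts[v - 1]
    # walk the input backwards so equal keys keep their relative order
    result = [None] * len(items)
    for item in reversed(items):
        counts[key(item)] -= 1
        result[counts[key(item)]] = item
    return result

stocks = [("GOOG", 3), ("CSCO", 1), ("MSFT", 1)]
print(counting_sort(stocks, key=lambda s: s[1], k=4))
# [('CSCO', 1), ('MSFT', 1), ('GOOG', 3)]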
Conclusion:
Since each element contains not only an integer key but also additional information, you can distinguish elements even when their keys are equal, so it is meaningful to ask whether the algorithm is stable, and counting sort is indeed a stable sorting algorithm when implemented appropriately.
References:
Some good materials explaining counting sort and its stability:
http://www.algorithmist.com/index.php/Counting_sort (this article explains this question particularly well)
http://courses.csail.mit.edu/6.006/fall11/rec/rec07.pdf
http://rosettacode.org/wiki/Sorting_algorithms/Counting_sort (a list of counting sort implementations in different programming languages. If you compare them with the algorithm in the Wikipedia entry below, you will find that most of them do not implement the full counting sort: they implement only the integer-sorting variant and omit the cumulative-sum step. The Go implementation at this link, however, provides both variants: one for sorting plain integers and one for sorting elements that carry additional information.)
http://en.wikipedia.org/wiki/Counting_sort

Simple, really: instead of a simple counter for each 'bucket', it's a linked list.
That is, instead of
0(1) 1(1) 3(1) 4(1) 6(3) 8(1)
You get
0(.) 1(.) 3(.) 4(.) 6(a,b,c) 8(.)
(here I use . to denote some item in the bucket).
Then just dump them back into one sorted list:
0 1 3 4 6a 6b 6c 8
That is, when you find an item with key x, knowing that it may carry other information that distinguishes it from other items with the same key, you don't just increment a counter for bucket x (which would discard all that extra information).
Instead, you have a linked list (or a similarly ordered data structure with constant amortized-time append) for each bucket, and you append each item to the end of the list for bucket x as you scan the input left to right.
So instead of using O(k) space for k counters, you have O(k) initially empty lists whose lengths sum to n at the end of the "counting" portion of the algorithm. This variant of counting sort is still O(n + k), as before.
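A minimal Python sketch of this bucket variant, with plain lists standing in for linked lists (Python's list append is amortized O(1)):

def counting_sort_buckets(items, key, k):
    buckets = [[] for _ in range(k)]   # one initially empty list per key
    for item in items:                 # left-to-right scan preserves input order
        buckets[key(item)].append(item)
    # concatenate the buckets back into one sorted list
    return [item for bucket in buckets for item in bucket]

print(counting_sort_buckets(["6a", "1", "8", "3", "6b", "0", "6c", "4"],
                            key=lambda s: int(s[0]), k=9))
# ['0', '1', '3', '4', '6a', '6b', '6c', '8']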

Your solution is not a full counting sort, and discards the associated values.
Here's the full counting sort algorithm.
After you calculated the histogram:
0(1) 1(1) 3(1) 4(1) 6(3) 8(1)
you have to calculate the accumulated sums - each cell will contain how many elements are less than or equal to that value:
0(1) 1(2) 3(3) 4(4) 6(7) 8(8)
Now you start from the end of your original list and go backwards.
The last element is 4. There are 4 elements less than or equal to 4, so 4 goes in the 4th position. You decrement the counter for 4.
0(1) 1(2) 3(3) 4(3) 6(7) 8(8)
The next element is 6c. There are 7 elements less than or equal to 6, so 6c goes in the 7th position. Again, you decrement the counter for 6.
0(1) 1(2) 3(3) 4(3) 6(6) 8(8)
^ the next 6 will now go in the 6th position
As you can see, this algorithm is a stable sort. The order for the elements with the same key will be kept.

If your three "6" values are distinguishable, then your counting sort is wrong (it discards information about the values, which a true sort doesn't do, because a true sort only re-orders the values).
If your three "6" values are not distinguishable, then the sort is stable, because you have three indistinguishable "6"s in the input, and three in the output. It's meaningless to talk about whether they have or have not been "re-ordered": they're identical.
The concept of non-stability only applies when the values have some associated information which does not participate in the order. For instance if you were sorting pointers to those integers, then you could "tell the difference" between the three 6s by looking at their different addresses. Then it would be meaningful to ask whether any particular sort was stable. A counting sort based on the integer values then would not be sorting the pointers. A counting sort based on the pointer values would not order them by integer value, rather by address.

Related

Finding the combination from an array of numbers that gives the required coefficient

Please recommend the optimal algorithm or solution for such a task:
There are several arrays with fractional numbers
a = [1.5, 2, 3, 4.5, 7, 10, ...(up to 100 numbers)]
b = [5, 6, 8, 14, ...]
c = [1, 2, 4, 6.25, 8.15 ...] (up to 7 arrays)
The arrays can be of arbitrary (and differing) lengths.
It is required to select one number from each array in such a way that their product falls into a given range.
For example, the required product should be between 40 and 50.
Solution can be:
a[2] * b[2] * c[1] = 3 * 8 * 2 = 48
a[0] * b[3] * c[1] = 1.5 * 14 * 2 = 42
If there can be several solutions (different combinations), then how can you find them all in the optimal way?
This is doable, but barely. It will require combining pairs of things over and over again using a variety of strategies.
First of all, if you have 2 arrays of no more than 100 things each, you can create an array of all pairs, sorted by product either ascending or descending, and it only has 10,000 things in it.
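For instance, that brute-force pairing step might look like this in Python (A and B stand in for two of the input arrays):

from itertools import product as cartesian

A = [1.5, 2, 3, 4.5, 7, 10]
B = [5, 6, 8, 14]
pairs_asc = sorted(x * y for x, y in cartesian(A, B))  # at most 100 * 100 = 10,000 products
print(pairs_asc[:5])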
Next, we can use a heap to implement a priority queue.
With a priority queue, we can combine 2 ordered arrays of size at most 10,000 to stream out the products in either ascending or descending order while never keeping track of more than 10,000 things. How? First we create a data structure like this:
Create a priority queue
For every entry a of array A:
    put (a, B[0], 0) into the queue, using the product as its priority
Return a data structure which contains B and the priority queue
And now we can get values out like this:
If the priority queue is empty:
    we're done
else:
    take the first element (a, b, index) off the queue
    if index is not at the end of B:
        insert (a, B[index + 1], index + 1) into the queue, using the product as its priority
    return that first element
And we can peek at them by just looking at the first element of the queue without touching the data structure.
This strategy can stream through 2 arrays of size 10,000 with total work just a few billion operations.
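Here is a sketch of that streaming merge in Python using heapq (the names are mine; ascending order shown, positive values assumed):

import heapq

def stream_products_ascending(A, B):
    # A and B are sorted ascending and contain positive numbers.
    # Yields (a * b, a, b) for every pair, in ascending order of product,
    # while never holding more than len(A) entries in the heap.
    if not A or not B:
        return
    heap = [(a * B[0], a, 0) for a in A]
    heapq.heapify(heap)
    while heap:
        prod, a, j = heapq.heappop(heap)
        if j + 1 < len(B):
            heapq.heappush(heap, (a * B[j + 1], a, j + 1))
        yield prod, a, B[j]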
OK, so now we can arrange to always have 7 arrays. (Some may simply be a trivial [1].) We can start as follows with the brute force strategy.
Combine the first 2 ascending.
Combine the second 2 ascending.
Combine the third 2 descending.
Arrange the last descending.
Next we can use the priority queue merge strategy as follows:
Combine (first 2) with (second 2) ascending
Combine (third 2) with last descending
We just need the generators at the moment.
Now our strategy will look like this:
For each combination (in ascending order) from the first 4:
    For each combination from the last 3 that lands in the window:
        emit the final combination
But how do we do the window? Well, as the combination from the first 4 goes up, the window that the last 3 has to fall in goes down. So adjusting the window looks like this:
While there is a next value and the next value is large enough to fit in the window:
    extract the next value
    add it to the end of the window
While the first value is too large for the window:
    remove the first value from the window
(Variable-sized arrays, such as Python's list, can do the append in amortized O(1); removal from the front can also be made O(1) by advancing an index instead of actually deleting, or by using a double-ended queue such as collections.deque.)
So our actual way to finish (sketched in Python after this answer) is:
For each combination (in ascending order) from the first 4:
    adjust the window of entries from the last 3
    For each entry in the window from the last 3:
        emit the final combination
This has a fixed overhead of a few billion operations plus O(number of answers) to actually emit the combinations. This includes a number of data structures with around 10k items, plus a window whose maximum size is 1 million items for a maximum memory usage of a few hundred MB.
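To make the window logic concrete, here is a Python sketch of that final pass (names are mine; both inputs are generator streams as described above, and all values are assumed positive):

from collections import deque

def emit_in_range(first4_asc, last3_desc, lo, hi):
    # first4_asc yields products of the first four arrays, ascending;
    # last3_desc yields products of the last three arrays, descending.
    window = deque()
    rest = iter(last3_desc)
    nxt = next(rest, None)
    for a in first4_asc:
        # grow: descending values enter once they are large enough to fit
        while nxt is not None and a * nxt >= lo:
            window.append(nxt)
            nxt = next(rest, None)
        # shrink: values at the front are now too large for the window
        while window and a * window[0] > hi:
            window.popleft()
        for b in window:
            yield a, b  # lo <= a * b <= hi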

Why is it able to split between 82 and 10?

If I have the array [9, 82, 10] and use merge sort, I thought I should compare the left index to the right index and, if l < r, split it into two arrays, right? But in the video the array is cut in half between 82 and 10, even though 82 > 10. I am so confused.
I think you are confusing this with the logic of another algorithm (quicksort, maybe?), because the split happens irrespective of the values. Where the split happens is influenced only by the current length of the array. The idea is to split the array in half. If the length of the array is odd, that is not exactly possible, so the middle element goes into either the left or the right part (it does not matter which).
In this case it could split the array into either [9] and [82, 10], or [9, 82] and [10]. Apparently you saw the latter.
Only after the split do the actual values start to play a role. This is when the parts are merged back together. First the left and right parts are sorted (recursively), and then they are merged.
During that merge, a value from the left part and a value from the right part are compared. Each time, the smaller one is put in the result array, and the "pointer" into the part it came from moves one position ahead.
In short: merge sort has two phases: split and merge. Values are not compared during the split phase, only during the merge phase.
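A minimal Python sketch (my own illustration, not the video's code) that makes the two phases visible:

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2          # split phase: depends only on length, never on values
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    merged = []                # merge phase: this is where values are compared
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps the sort stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([9, 82, 10]))  # [9, 10, 82]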
To further clarify: no sorting or merging takes place until recursion produces two sub-arrays of size 1, at which point merging begins and then follows the stack path, merging in a depth-first / left-first pattern (the animation makes this appear to happen in parallel).
The particular implementation used in the video works with first and last indexes. The middle index is effectively (first + last) / 2, and since array[4] = 9, array[5] = 82, array[6] = 10, we have first = 4, last = 6, middle = (4 + 6) / 2 = 5, so it splits the sub-array into array[4..5] and array[6..6]. Although this is common for quicksort, most merge sorts work with a beginning and an ending index, where beginning index = first index and ending index = 1 + last index. In this case begin = 4, end = 7, middle = (4 + 7) / 2 = 5, and the split would be array[4, 5) = array[4..4] and array[5, 7) = array[5..6] (using "[b, e)" to indicate that the ending index is one past the last index).
It should also be noted that most library implementations of stable sorts are some variation of iterative bottom-up merge sort, which skips the recursive process and instead treats an array of n elements as n sorted runs of size 1, then begins merging immediately in breadth-first (across the array) order, merging even and odd runs and doubling the size of the sorted runs until the run size >= the array size. Typical variations are hybrids of insertion sort and merge sort, such as timsort.
https://en.wikipedia.org/wiki/Timsort
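For illustration, here is a sketch of the bottom-up variant just described (a simplified model of the idea, not any particular library's code):

def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def merge_sort_bottom_up(a):
    a = list(a)
    n = len(a)
    width = 1                      # treat the array as n sorted runs of size 1
    while width < n:
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            a[lo:hi] = merge(a[lo:mid], a[mid:hi])
        width *= 2                 # run size doubles on each pass
    return a

print(merge_sort_bottom_up([9, 82, 10, 5, 7]))  # [5, 7, 9, 10, 82]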

Compare rotated lists, containing duplicates [duplicate]

This question already has answers here: How to check whether two lists are circularly identical in Python (18 answers).
I'm looking for an efficient way to compare lists of numbers to see if they match at any rotation (comparing 2 circular lists).
When the lists don't have duplicates, picking smallest/largest value and rotating both lists before comparisons works.
But when there may be many duplicate large values, this isn't so simple.
For example, the lists [9, 2, 0, 0, 9] and [0, 0, 9, 9, 2] match, whereas [9, 0, 2, 0, 9] doesn't (since the order is different).
Here's an example of an inefficient function which works.
def min_list_rotation(ls):
    return min(ls[i:] + ls[:i] for i in range(len(ls)))

# example use
ls_a = [9, 2, 0, 0, 9]
ls_b = [0, 0, 9, 9, 2]
print(min_list_rotation(ls_a) == min_list_rotation(ls_b))
This can be improved on for efficiency:
- check that the sorted lists match before running exhaustive tests.
- only test rotations that start with the minimum value (skipping matching values after that), effectively finding the minimum value with the furthest and smallest number after it (continuing, in the case there are multiple matching next-biggest values).
- compare rotations without creating new lists each time.
However it's still not a very efficient method since it relies on checking many possibilities.
Is there a more efficient way to perform this comparison?
Related question:
Compare rotated lists in python
If you are looking for duplicates in a large number of lists, you could rotate each list to its lexicographically minimal string representation, then sort the list of lists or use a hash table to find duplicates. This canonicalisation step means that you don't need to compare every list with every other list. There are clever O(n) algorithms for finding the minimal rotation described at https://en.wikipedia.org/wiki/Lexicographically_minimal_string_rotation.
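Sketched in Python, using the question's own min-rotation idea as the canonicalisation step (swap in Booth's algorithm from the link above to bring that step down to O(n)):

from collections import defaultdict

def canonical(ls):
    # naive O(n^2) canonical form; Booth's algorithm (see the Wikipedia
    # link above) computes the same rotation in O(n)
    return min(tuple(ls[i:] + ls[:i]) for i in range(len(ls)))

def group_rotations(lists):
    groups = defaultdict(list)
    for ls in lists:
        groups[canonical(ls)].append(ls)
    return [g for g in groups.values() if len(g) > 1]

print(group_rotations([[9, 2, 0, 0, 9], [0, 0, 9, 9, 2], [9, 0, 2, 0, 9]]))
# [[[9, 2, 0, 0, 9], [0, 0, 9, 9, 2]]]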
You almost have it.
You can do a kind of "normalisation" or "canonicalisation" of each list independently of the others; then you only need to compare item by item (or, if you want, put them in a map, or in a set to eliminate duplicates).
1. Take the minimum item which is not preceded by itself (in a circular sense). In your example 92009, you should take the first 0 (not the second one).
2. If the same item repeats throughout (say 00000), you just keep that: 00000.
3. If the minimum appears several times, take the next item after each occurrence, keep the minimal ones, and keep going until you find one unique path of minimums. Example: 90148301562 => you have 0148... and 0156... => you take 0148.
4. If you cannot separate the different paths (equality continues indefinitely), you have a repeating pattern, and then it doesn't matter: you take any of them. Example: 014376501437650143765 repeats the same pattern 0143765... It is like AAA, where A = 0143765.
5. Once your lists are in this form, it is easy to compare two of them.
How to do this efficiently:
Iterate over your list to find the minimums Mx (not preceded by themselves). If you find several, keep all of them.
Then iterate from each minimum Mx, take the next item, and keep only the minimal continuations. If you complete an entire cycle, you have a repeating pattern.
Except in the case of a repeating pattern, this yields the unique minimal rotation.
Hope it helps.
I would do this in expected O(N) time using a polynomial hash function to compute the hash of list A, and every cyclic shift of list B. Where a shift of list B has the same hash as list A, I'd compare the actual elements to see if they are equal.
The reason this is fast is that with polynomial hash functions (which are extremely common!), you can calculate the hash of each cyclic shift from the previous one in constant time, so you can calculate hashes for all of the cyclic shifts in O(N) time.
It works like this:
Let's say B has N elements; then the hash of B using prime P is:
Hb = 0;
for (i = 0; i < N; i++)
{
    Hb = Hb * P + B[i];
}
This is an optimized way to evaluate a polynomial in P (Horner's method), and is equivalent to:
Hb = 0;
for (i = 0; i < N; i++)
{
    Hb += B[i] * P^(N-1-i); // ^ is exponentiation, not XOR
}
Notice how every B[i] is multiplied by P^(N-1-i). If we shift B to the left by 1, then every B[i] will be multiplied by an extra P, except the first one. Since multiplication distributes over addition, we can multiply all the components at once just by multiplying the whole hash, and then fix up the factor for the first element.
The hash of the left shift of B is just
Hb1 = Hb*P + B[0]*(1-(P^N))
The second left shift:
Hb2 = Hb1*P + B[1]*(1-(P^N))
and so on...
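A Python sketch of the whole scheme (the base P and modulus M are illustrative choices, not from the answer; hash matches are verified by comparing the actual rotation):

def cyclic_equal(A, B, P=1_000_003, M=(1 << 61) - 1):
    # Expected O(N): each rotation's hash comes from the previous one in O(1).
    if len(A) != len(B):
        return False
    n = len(A)
    if n == 0:
        return True

    def poly_hash(xs):
        h = 0
        for x in xs:
            h = (h * P + x) % M
        return h

    ha = poly_hash(A)
    hb = poly_hash(B)                      # hash of rotation 0 of B
    fix = (1 - pow(P, n, M)) % M           # the (1 - P^N) factor from above
    for i in range(n):
        if hb == ha and B[i:] + B[:i] == A:   # verify on hash match
            return True
        hb = (hb * P + B[i] * fix) % M        # hash of the next left rotation
    return False

print(cyclic_equal([9, 2, 0, 0, 9], [0, 0, 9, 9, 2]))  # True
print(cyclic_equal([9, 2, 0, 0, 9], [9, 0, 2, 0, 9]))  # False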

Disperse Duplicates in an Array

Source : Google Interview Question
Write a routine to ensure that identical elements in the input are maximally spread in the output.
Basically, we need to place the same elements in such a way that the TOTAL spread is as large as possible.
Example:
Input: {1,1,2,3,2,3}
Possible Output: {1,2,3,1,2,3}
Total dispersion = difference between positions of the 1's + 2's + 3's = (4-1) + (5-2) + (6-3) = 9.
I am NOT AT ALL sure if there's a polynomial-time algorithm available for this. No other detail was provided with the question.
What I thought is: calculate the frequency of each element in the input, then arrange them in the output, one distinct element at a time, until all the frequencies are exhausted.
I am not sure of my approach.
Any approaches/ideas, people?
I believe this simple algorithm would work:
count the number of occurrences of each distinct element.
make a new list
add one instance of all elements that occur more than once to the list (order within each group does not matter)
add one instance of all unique elements to the list
add one instance of all elements that occur more than once to the list
add one instance of all elements that occur more than twice to the list
add one instance of all elements that occur more than thrice to the list
...
Now, this will intuitively not give a good spread:
for {1, 1, 1, 1, 2, 3, 4} ==> {1, 2, 3, 4, 1, 1, 1}
for {1, 1, 1, 2, 2, 2, 3, 4} ==> {1, 2, 3, 4, 1, 2, 1, 2}
However, I think this is the best spread you can get given the scoring function provided.
Since the dispersion score counts the sum of the distances instead of the sum of squared distances, you can have several duplicates close together, as long as you have a large gap somewhere else to compensate.
For a sum-of-squared-distances score, the problem becomes harder.
Perhaps the interview question hinged on the candidate recognizing this weakness in the scoring function?
In Perl:
@a = (9,9,9,2,2,2,1,1,1);
Then make a hash table of the counts of the different numbers in the list, like a frequency table:
map { $x{$_}++ } @a;
Then repeatedly walk through all the keys found, with the keys in a known order, and add the appropriate number of individual numbers to an output list until all the keys are exhausted:
@r = ();
$g = 1;
while ( $g == 1 ) {
    $g = 0;
    for my $n (sort keys %x) {
        if ( $x{$n} > 0 ) {    # > 0, so the last copy of each key is emitted too
            push @r, $n;
            $x{$n}--;
            $g = 1;
        }
    }
}
I'm sure that this could be adapted to any programming language that supports hash tables.
Python code for the algorithm suggested by Vorsprung and HugoRune:

from collections import Counter, defaultdict

def max_spread(data):
    cnt = Counter(data)
    res, num = [], list(cnt)
    while len(cnt) > 0:
        for i in num:
            if cnt.get(i, 0) > 0:   # skip keys whose copies are exhausted
                res.append(i)
                cnt[i] -= 1
                if cnt[i] == 0:
                    del cnt[i]
    return res

def calc_spread(data):
    d = defaultdict(list)
    for i, v in enumerate(data):
        d[v].append(i)
    return sum(max(x) - min(x) for x in d.values())
HugoRune's answer takes some advantage of the unusual scoring function, but we can actually do even better: suppose there are d distinct non-unique values; then the only thing required for a solution to be optimal is that the first d values of the output consist of these d values in any order, and likewise the last d values of the output consist of these values in any (possibly different) order. (This implies that all unique numbers appear between the first and last instances of every non-unique number.)
The relative order of the first copies of non-unique numbers doesn't matter, and likewise nor does the relative order of their last copies. Suppose the values 1 and 2 both appear multiple times in the input, and that we have built a candidate solution obeying the condition I gave in the first paragraph that has the first copy of 1 at position i and the first copy of 2 at position j > i. Now suppose we swap these two elements. Element 1 has been pushed j - i positions to the right, so its score contribution will drop by j - i. But element 2 has been pushed j - i positions to the left, so its score contribution will increase by j - i. These cancel out, leaving the total score unchanged.
Now, any permutation of elements can be achieved by swapping elements as follows: swap the element in position 1 with the element that should be at position 1, then do the same for position 2, and so on. After the ith step, the first i elements of the permutation are correct. We know that every swap leaves the scoring function unchanged, and a permutation is just a sequence of swaps, so every permutation also leaves the scoring function unchanged. This is true for the d elements at both ends of the output array.
When 3 or more copies of a number exist, only the positions of the first and last copies contribute to the distance for that number; it doesn't matter where the middle ones go. I'll call the elements between the two blocks of d elements at either end the "central" elements. They consist of the unique elements, plus some number of copies of those non-unique elements that appear at least 3 times. As before, it's easy to see that any permutation of these central elements corresponds to a sequence of swaps, and that any such swap leaves the overall score unchanged (in fact it's even simpler than before, since swapping two central elements does not change the score contribution of either element).
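A quick brute-force check of this claim on the question's example (my own snippet, not part of the original answer):

from itertools import permutations

def spread(arr):
    pos = {}
    for i, v in enumerate(arr):
        pos.setdefault(v, []).append(i)
    return sum(p[-1] - p[0] for p in pos.values())

data = (1, 1, 2, 3, 2, 3)
best = max(spread(p) for p in set(permutations(data)))
print(best)  # 9 -- and many distinct permutations achieve it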
This leads to a simple O(n log n) algorithm (or O(n) if you use bucket sort for the first step) to generate a solution array Y from a length-n input array X:
Sort the input array X.
Use a single pass through X to count the number of distinct non-unique elements. Call this d.
Set i, j and k to 0.
While i < n:
    If i + 1 < n and X[i+1] == X[i], we have a non-unique element:
        Set Y[j] = Y[n-j-1] = X[i].
        Increment i twice, and increment j once.
        While i < n and X[i] == X[i-1]:
            Set Y[d+k] = X[i].
            Increment i and k.
    Otherwise we have a unique element:
        Set Y[d+k] = X[i].
        Increment i and k.
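A direct Python translation of this pseudocode (my own sketch):

def disperse(X):
    X = sorted(X)
    n = len(X)
    # d = number of distinct non-unique values
    d = 0
    i = 0
    while i < n:
        j = i
        while j < n and X[j] == X[i]:
            j += 1
        if j - i > 1:
            d += 1
        i = j
    Y = [None] * n
    i = j = k = 0
    while i < n:
        if i + 1 < n and X[i + 1] == X[i]:
            Y[j] = Y[n - j - 1] = X[i]   # first and last copies anchor the two ends
            i += 2
            j += 1
            while i < n and X[i] == X[i - 1]:
                Y[d + k] = X[i]          # middle copies are "central" elements
                i += 1
                k += 1
        else:
            Y[d + k] = X[i]              # unique values are "central" elements
            i += 1
            k += 1
    return Y

print(disperse([1, 1, 2, 3, 2, 3]))  # [1, 2, 3, 3, 2, 1], total dispersion 9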

Calculate Median in An Array - Can someone tell me what is going on in this line of code?

This is a solution for calculating the median value in an array. I get the first couple of lines, duh ;), but the return line is where the magic happens. Can someone explain how the 'sorted' variable is being used and why it's next to brackets, and why the other variable 'len' is enclosed in those parentheses and then brackets? It's almost like sorted is all of a sudden being used as an array? Thanks!
def median(array)
  sorted = array.sort
  len = sorted.length
  return ((sorted[(len - 1) / 2] + sorted[len / 2]) / 2.0).to_f
end

puts median([3,2,3,8,91])
puts median([2,8,3,11,-5])
puts median([4,3,8,11])
Consider this:
[1,2,2,3,4] and [1,2,3,4]. Both arrays are sorted, but they have an odd and an even number of elements respectively. That piece of code takes both cases into account.
sorted is indeed an array. You sort [2,3,1,4] and you get back [1,2,3,4]. Then you calculate the two middle indices, (len - 1) / 2 and len / 2 (which coincide for an odd number of elements and differ for an even number), and average the values found there.
Yes, array.sort is returning an array and it is assigned to sorted. You can then access it via array indices.
If you have an odd number of elements, say 5 elements as in the example, the indices come out to be:
(len - 1) / 2 = (5 - 1) / 2 = 2
len / 2 = 5 / 2 = 2 (remember this is integer division, so the decimal gets truncated)
So you take the value at index 2, add it to itself, and divide by 2, which gives just the value at index 2.
If you have an even number of elements, say 4,
(len - 1) / 2 = (4 - 1) / 2 = 1 (remember this is integer division, so the decimal gets truncated)
len / 2 = 4 / 2 = 2
So in this case you are effectively averaging the two middle elements, at indices 1 and 2, which is the definition of the median for an even number of elements.
It's almost like sorted is all of a sudden being used as an array?
Yes, it is. On line 2 it's initialized as an array with the same elements as the input, but in ascending order (the default sort is ascending). On line 3, len is initialized with the length of the sorted array. So yes, sorted has been used as an array ever since line 2, because that's what it is.
