Cycle sort Algorithm

I was browsing the internet when I found out that there is an algorithm called cycle sort which makes the least number of memory writes. But I am not able to find the algorithm anywhere. How do I detect whether a cycle is there or not in an array?
Can anybody give a complete explanation of this algorithm?

The cycle sort algorithm is motivated by something called a cycle decomposition. Cycle decompositions are best explained by example. Let's suppose that you have this array:
4 3 0 1 2
Let's imagine that we have this sequence in sorted order, as shown here:
0 1 2 3 4
How would we have to shuffle this sorted array to get to the shuffled version? Well, let's place them side-by-side:
0 1 2 3 4
4 3 0 1 2
Let's start from the beginning. Notice that the number 0 got swapped to the position initially held by 2. The number 2, in turn, got swapped to the position initially held by 4. Finally, 4 got swapped to the position initially held by 0. In other words, the elements 0, 2, and 4 all were cycled forward one position. That leaves behind the numbers 1 and 3. Notice that 1 swaps to where 3 is and 3 swaps to where 1 is. In other words, the elements 1 and 3 were cycled forward one position.
As a result of the above observations, we'd say that the sequence 4 3 0 1 2 has cycle decomposition (0 2 4)(1 3). Here, each group of terms in parentheses means "circularly cycle these elements forward": cycle 0 to the spot where 2 is, 2 to the spot where 4 is, and 4 to the spot where 0 is, then cycle 1 to the spot where 3 is and 3 to the spot where 1 is.
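For concreteness, here is a small sketch (my own code, not from the original answer; it assumes the array is a permutation of 0..n-1 like the example) that computes this cycle decomposition by repeatedly following each element to the position it occupies in the shuffled array:

def cycle_decomposition(shuffled):
    # position[v] = index where value v currently sits in the shuffled array
    position = {v: i for i, v in enumerate(shuffled)}
    seen = set()
    cycles = []
    for start in range(len(shuffled)):
        if start in seen:
            continue
        cycle, v = [], start
        while v not in seen:
            seen.add(v)
            cycle.append(v)
            v = position[v]   # the index where v sits names the next element
        if len(cycle) > 1:    # 1-cycles are already in place
            cycles.append(cycle)
    return cycles

print(cycle_decomposition([4, 3, 0, 1, 2]))   # [[0, 2, 4], [1, 3]]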
If you have the cycle decomposition for a particular array, you can get it back into sorted order with the fewest possible writes by just cycling everything backward one spot. The idea behind cycle sort is to determine what the cycle decomposition of the input array is, then reverse it to put everything back in its place.
Part of the challenge is figuring out where everything initially belongs, since a cycle decomposition assumes you already know this. Typically, cycle sort works by going to each element and counting how many elements are smaller than it. This is expensive - it contributes to the Θ(n²) runtime of the sorting algorithm - but doesn't require any writes.

Here's a Python implementation if anyone needs it:
def cycleSort(vector):
    writes = 0

    # Loop through the vector to find cycles to rotate.
    for cycleStart, item in enumerate(vector):

        # Find where to put the item.
        pos = cycleStart
        for item2 in vector[cycleStart + 1:]:
            if item2 < item:
                pos += 1

        # If the item is already there, this is not a cycle.
        if pos == cycleStart:
            continue

        # Otherwise, put the item there or right after any duplicates.
        while item == vector[pos]:
            pos += 1
        vector[pos], item = item, vector[pos]
        writes += 1

        # Rotate the rest of the cycle.
        while pos != cycleStart:

            # Find where to put the item.
            pos = cycleStart
            for item2 in vector[cycleStart + 1:]:
                if item2 < item:
                    pos += 1

            # Put the item there or right after any duplicates.
            while item == vector[pos]:
                pos += 1
            vector[pos], item = item, vector[pos]
            writes += 1

    return writes

x = [0, 1, 2, 2, 2, 2, 1, 9, 3.5, 5, 8, 4, 7, 0, 6]
w = cycleSort(x)
print(w, x)

Related

Counting Sort - Why go in reverse order during the insertion?

I was looking at the code for Counting Sort on GeeksForGeeks and during the final stage of the algorithm where the elements from the original array are inserted into their final locations in the sorted array (the second-to-last for loop), the input array is traversed in reverse order.
I can't seem to understand why you can't just go from the beginning of the input array to the end, like so:
for i in range(len(arr)):
    output_arr[count_arr[arr[i] - min_element] - 1] = arr[i]
    count_arr[arr[i] - min_element] -= 1
Is there some subtle reason for going in reverse order that I'm missing? Apologies if this is a very obvious question. I saw Counting Sort implemented in the same style here as well.
Any comments would be helpful, thank you!
Stability. With your way, the order of equal-valued elements gets reversed instead of preserved. Going over the input backwards cancels out the backwards copying (that -= 1 thing).
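To see the stability point concretely, here is a small sketch (my own code, not the GeeksForGeeks implementation) that runs the final placement pass in both directions over records carrying a label, so the order of equal keys is visible:

def counting_sort(records, key, reverse_pass=True):
    max_key = max(key(r) for r in records)
    count = [0] * (max_key + 1)
    for r in records:
        count[key(r)] += 1
    for k in range(1, max_key + 1):      # cumulative counts
        count[k] += count[k - 1]
    output = [None] * len(records)
    order = reversed(records) if reverse_pass else records
    for r in order:
        count[key(r)] -= 1
        output[count[key(r)]] = r
    return output

recs = [(1, 'a'), (2, 'b'), (1, 'c'), (2, 'd')]
print(counting_sort(recs, key=lambda r: r[0]))                      # [(1, 'a'), (1, 'c'), (2, 'b'), (2, 'd')] -- stable
print(counting_sort(recs, key=lambda r: r[0], reverse_pass=False))  # [(1, 'c'), (1, 'a'), (2, 'd'), (2, 'b')] -- equal keys reversed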
To process the array in forward order, either the count / index array needs to be one element larger (shifted by one so that each entry holds the starting index for its value), or two local variables can be used. Example for an integer array:
def countSort(arr):
    output = [0 for i in range(len(arr))]
    count = [0 for i in range(257)]        # change
    for i in arr:
        count[i+1] += 1                    # change
    for i in range(256):
        count[i+1] += count[i]             # change
    for i in range(len(arr)):
        output[count[arr[i]]] = arr[i]     # change
        count[arr[i]] += 1                 # change
    return output

arr = [4,3,0,1,3,7,0,2,6,3,5]
ans = countSort(arr)
print(ans)
or using two variables, s to hold the running sum, c to hold the current count:
def countSort(arr):
    output = [0 for i in range(len(arr))]
    count = [0 for i in range(256)]
    for i in arr:
        count[i] += 1
    s = 0
    for i in range(256):
        c = count[i]
        count[i] = s
        s = s + c
    for i in range(len(arr)):
        output[count[arr[i]]] = arr[i]
        count[arr[i]] += 1
    return output

arr = [4,3,0,1,3,7,0,2,6,3,5]
ans = countSort(arr)
print(ans)
Here we are talking about a stable sort, which preserves the relative order of equal elements.
For example, if we have an array like
arr    -> 5  8  3  1  1  2  6
value     0  1  2  3  4  5  6  7  8
count  -> 0  2  1  1  0  1  1  0  1
Now we take the cumulative sum of all the frequencies:
value     0  1  2  3  4  5  6  7  8
count  -> 0  2  3  4  4  5  6  6  7
When traversing the original array we prefer to go from the end, because the cumulative count of a value is the position just past its last occurrence: decrementing the count and placing the element there puts each element in its proper position while keeping equal elements in their original relative order.
If we start traversing from the beginning instead, the cumulative sum no longer buys us stability: equal elements end up placed in the reverse of their original order, which is no better than placing them without regard to their input positions at all.

How to perform range updates in sqrt(n) time?

I have an array and I have to perform queries and updates on it.
For a query, I have to find the frequency of a particular number in a range from l to r; for an update, I have to add x to every element in some range l to r.
How can I perform this?
I thought of sqrt(n) decomposition, but I don't know how to perform range updates with this time complexity.
Edit - Since some people are asking for an example, here is one
Suppose the array is of size n = 8
and it is
1 3 3 4 5 1 2 3
And here are 3 queries to help explain what I am trying to say:
q 1 5 3 - This means that you have to find the frequency of 3 in the range 1 to 5, which is 2 as 3 appears at the 2nd and 3rd positions.
The second is an update query and it goes like this - u 2 4 6 -> This means that you have to add 6 to the array elements in the range 2 to 4. So the new array will become
1 9 9 10 5 1 2 3
And the last query is the same as the first one, which will now return 0 as there is no 3 in the array from position 1 to 5 any more.
I believe things must be more clear now. :)
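One common way to get O(sqrt(n)) per operation for exactly this pair of operations is block ("sqrt") decomposition: split the array into blocks of about sqrt(n) elements, keep a frequency Counter per block plus a pending "lazy" amount added to the whole block. Below is a minimal sketch of that idea (my own code, with 0-based indices and inclusive ranges; the class and method names are made up for illustration):

import math
from collections import Counter

class SqrtDecomposition:
    def __init__(self, arr):
        self.a = list(arr)
        self.size = max(1, int(math.sqrt(len(arr))))
        nblocks = (len(arr) + self.size - 1) // self.size
        self.lazy = [0] * nblocks                       # pending add per block
        self.count = [Counter(self.a[b * self.size:(b + 1) * self.size])
                      for b in range(nblocks)]

    def _rebuild(self, b):
        lo, hi = b * self.size, min((b + 1) * self.size, len(self.a))
        self.count[b] = Counter(self.a[lo:hi])

    def update(self, l, r, x):           # add x to every a[i], l <= i <= r
        b = l // self.size
        while b * self.size <= r:
            lo = b * self.size
            hi = min(lo + self.size - 1, len(self.a) - 1)
            if l <= lo and hi <= r:
                self.lazy[b] += x        # block fully covered: just mark it
            else:
                for i in range(max(l, lo), min(r, hi) + 1):
                    self.a[i] += x       # partial block: touch the elements
                self._rebuild(b)
            b += 1

    def query(self, l, r, v):            # how many a[i] == v for l <= i <= r
        total = 0
        b = l // self.size
        while b * self.size <= r:
            lo = b * self.size
            hi = min(lo + self.size - 1, len(self.a) - 1)
            if l <= lo and hi <= r:
                total += self.count[b][v - self.lazy[b]]
            else:
                for i in range(max(l, lo), min(r, hi) + 1):
                    if self.a[i] + self.lazy[b] == v:
                        total += 1
            b += 1
        return total

s = SqrtDecomposition([1, 3, 3, 4, 5, 1, 2, 3])   # the example array, 0-based
print(s.query(0, 4, 3))   # 2
s.update(1, 3, 6)         # array is now effectively 1 9 9 10 5 1 2 3
print(s.query(0, 4, 3))   # 0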
I developed this algorithm a long time (20+ years) ago for an arithmetic coder.
Both Update and Retrieve are performed in O(log(N)).
I named this algorithm the "Method of Intervals". Let me show you an example.
Imagine we have 8 intervals, numbered 0-7:
+--0--+--1--+--2-+--3--+--4--+--5--+--6--+--7--+
Let's create an additional set of intervals, each spanning a pair of the original ones:
+----01-----+----23----+----45-----+----67-----+
After that, we create one more layer of intervals, each spanning a pair from the second layer:
+---------0123---------+---------4567----------+
And at last, we create a single interval covering all 8:
+------------------01234567--------------------+
As you can see, in this structure, to retrieve the right border of interval [5] you just need to add together the lengths of intervals [0123] + [45]. To retrieve the left border of interval [5], you need the sum of the lengths of intervals [0123] + [4] (the left border of 5 is the right border of 4).
Of course, the left border of interval [0] is always 0.
If you look at the proposed structure carefully, you will see that the odd elements in each layer aren't needed. That is, you don't need elements 1, 3, 5, 7, 23, 67, 4567, since these elements are never used during Retrieval or Update.
Let's remove the odd elements and renumber what is left:
+--1--+--x--+--3-+--x--+--5--+--x--+--7--+--x--+
+-----2-----+-----x----+-----6-----+-----x-----+
+-----------4----------+-----------x-----------+
+----------------------8-----------------------+
As you can see, with this renumbering the numbers 1-8 are used; let them be array indexes. So the structure uses O(N) memory.
To retrieve the right border of interval [7], you need to add the lengths stored at indexes 4, 6, 7. To update the length of interval [7], you need to add the difference to all 3 of these values. As a result, both Retrieval and Update are performed in O(log(N)) time.
Now we need an algorithm that computes, from the original interval number, the set of indexes in this data structure. For instance, how to convert:
1 -> 1
2 -> 2
3 -> 3,2
...
7 -> 7,6,4
This is easy if we look at the binary representation of these numbers:
1 -> 1
10 -> 10
11 -> 11,10
111 -> 111,110,100
As you can see, in each chain the next value is the previous value with the rightmost "1" changed to "0". Using the simple bit operation "x & (x - 1)", we can write a simple loop to iterate over the array indexes related to an interval number:
int interval = 7;
do {
    int index = interval;
    do_something(index);
} while (interval &= interval - 1);
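For reference, here is a minimal sketch (my own code, not the answerer's) of the same structure, usually known as a Fenwick tree or binary indexed tree, with 1-based indexing; prefix_sum walks indexes like 7 -> 6 -> 4 exactly as described above, and update walks the covering nodes:

class Fenwick:
    def __init__(self, n):
        self.tree = [0] * (n + 1)        # tree[i] holds a partial interval length

    def update(self, i, delta):          # add delta to the length of interval i
        while i < len(self.tree):
            self.tree[i] += delta
            i += i & (-i)                # move to the next covering node

    def prefix_sum(self, i):             # right border of interval i
        total = 0
        while i > 0:
            total += self.tree[i]
            i &= i - 1                   # drop the rightmost set bit
        return total

f = Fenwick(8)
for idx, length in enumerate([3, 1, 4, 1, 5, 9, 2, 6], start=1):
    f.update(idx, length)
print(f.prefix_sum(7))   # 3+1+4+1+5+9+2 = 25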

Heap sort pseudo code algorithm

In the heap sort algorithm:
n = m
for k := m div 2 down to 0
    downheap(k);
repeat
    t := a[0]
    a[0] := a[n-1]
    a[n-1] := t
    n--
    downheap(0);
until n <= 0
Can someone please explain to me what is done in these lines:
n = m
for k := m div 2 down to 0
    downheap(k);
I think that is the heap building process, but what is meant by for k := m div 2 down to 0?
Also, is n the number of items? So in an array representation the last element is stored at a[n-1]?
But why do it for n >= 0? Can't we finish at n > 0, because the first element gets automatically sorted?
n = m
for k := m div 2 down to 0
    downheap(k);
In a binary heap, half of the nodes have no children. So you can build a heap by starting at the midpoint and sifting items down. What you're doing here is building the heap from the bottom up. Consider this array of five items:
[5, 3, 2, 4, 1]
Or, as a tree:
      5
    /   \
   3     2
  / \
 4   1
The length is 5, so we want to start at index 2 (assume a 1-based heap array). downheap, then, will look at the node labeled 3 and compare it with the smallest child. Since 1 is smaller than 3, we swap the items giving:
      5
    /   \
   1     2
  / \
 4   3
Since we reached a leaf level, we're done with that item. Move on to the first item, 5. It's larger than its smallest child, 1, so we swap the items:
      1
    /   \
   5     2
  / \
 4   3
But the item 5 is still larger than its children, so we do another swap:
      1
    /   \
   3     2
  / \
 4   5
And we're done. You have a valid heap.
It's instructive to do that by hand (with pencil and paper) to build a larger heap--say 10 items. That will give you a very good understanding of how the algorithm works.
For purposes of building the heap in this way, it doesn't matter if the array indexes start at 0 or 1. If the array is 0-based, then you end up making one extra call to downheap, but that doesn't do anything because the node you're trying to move down is already a leaf node. So it's slightly inefficient (one extra call to downheap), but not harmful.
It is important, however, that if your root node is at index 1, you stop your loop at n > 0 rather than n >= 0. In the latter case, you could very well end up adding a bogus value to your heap and removing an item that's supposed to be there.
for k:= m div 2 down to 0
This appears to be pseudocode for:
for(int k = m/2; k >= 0; k--)
Or possibly
for(int k = m/2; k > 0; k--)
Depending on whether "down to 0" is inclusive or not.
Also is n the number of items?
Initially, yes, but it decrements on the line n--.
Can't we finish at n > 0, because the first element gets automatically sorted?
Yes, this is effectively what happens. Once n becomes zero at n--, it's most of the way through the loop body, so the only thing that gets executed after that, before until n <= 0 terminates the loop, is downheap(0);
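For completeness, here is a small runnable sketch (my own code, not the question's pseudocode) that mirrors the structure above with 0-based indexing and a max-heap, so the array ends up sorted in ascending order; downheap is the sift-down operation being discussed:

def downheap(a, k, n):
    while True:
        child = 2 * k + 1                 # left child in a 0-based heap
        if child >= n:
            return                        # k is a leaf, nothing to do
        if child + 1 < n and a[child + 1] > a[child]:
            child += 1                    # pick the larger child
        if a[k] >= a[child]:
            return                        # heap property already holds
        a[k], a[child] = a[child], a[k]
        k = child

def heapsort(a):
    m = len(a)
    n = m
    for k in range(m // 2, -1, -1):       # "for k := m div 2 down to 0"
        downheap(a, k, n)                 # build the heap bottom-up
    while n > 1:
        a[0], a[n - 1] = a[n - 1], a[0]   # move the current maximum to the end
        n -= 1                            # "n--"
        downheap(a, 0, n)
    return a

print(heapsort([5, 3, 2, 4, 1]))          # [1, 2, 3, 4, 5]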

Dynamic programming: can interval of even 1's and 0's be found in linear time?

Found the following interview question on the web:
You have an array of 0s and 1s and you want to output all the intervals (i, j) where the number of 0s and the number of 1s are equal. Example:
pos = 0 1 2 3 4 5 6 7 8
      0 1 0 0 1 1 1 1 0
One interval is (0, 1) because there the number of 0s and 1s are equal. There are many other intervals; find all of them in linear time.
I think there is no linear time algorithm, as there may be n^2 such intervals.
Am I right? How can I prove that there are n^2 such intervals?
This is the fastest way I can think of to do this, and it is linear in the number of intervals there are.
Let L be your original list of numbers and A be a hash of empty arrays, where initially A[0] = [-1] (the -1 marks the position just before the start of the list, matching the chart below).
sum = 0
for i in 0..n
    if L[i] == 0:
        sum--
        A[sum].push(i)
    elif L[i] == 1:
        sum++
        A[sum].push(i)
Now A is essentially an x-y graph of the running sum of the sequence (x is the index in the list, y is the sum). Every time two x values x1 and x2 map to the same y value, you have an interval (x1, x2] where the number of 0s and 1s is equal.
Each array M in A, with m = M.length, contributes m(m-1)/2 such intervals (the arithmetic sum from 1 to m - 1).
Using your example to calculate A by hand, we use this chart:
L          #   0   1   0   1   0   0   1   1   1   1   0
A keys     0  -1   0  -1   0  -1  -2  -1   0   1   2   1
L index   -1   0   1   2   3   4   5   6   7   8   9  10
(I've added a # to represent the start of the list, with an index of -1. I've also removed all the numbers that are not 0 or 1, since they're just distractions.) A will look like this:
[-2]->[5]
[-1]->[0, 2, 4, 6]
[0]->[-1, 1, 3, 7]
[1]->[8, 10]
[2]->[9]
For any M = [a1, a2, a3, ...], (ai + 1, aj) where j > i will be an interval with the same number of 0s as 1s. For example, in [-1]->[0, 2, 4, 6], the intervals are (1, 2), (1, 4), (1, 6), (3, 4), (3, 6), (5, 6).
Building the array A is O(n), but printing the intervals from A takes time proportional to the number of intervals. In fact, that could be your proof that it is not quite possible to do this in linear time in n, because it's possible to have more intervals than n, and you need at least that many iterations to print them all.
Unless, of course, you consider building A enough to find all the intervals (since it's obvious from A what the intervals are), in which case it is linear in n :P
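If you only need the count (not the printout), the bucketing above can be written as a short runnable sketch (my own code): group indices by running sum, then each bucket of m indices contributes m(m-1)/2 balanced intervals:

from collections import defaultdict

def count_balanced_intervals(L):
    buckets = defaultdict(list)
    buckets[0].append(-1)              # the "#" start marker at index -1
    s = 0
    for i, x in enumerate(L):
        s += 1 if x == 1 else -1
        buckets[s].append(i)
    total = 0
    for indices in buckets.values():   # m indices give m*(m-1)/2 intervals
        m = len(indices)
        total += m * (m - 1) // 2
    return total

print(count_balanced_intervals([0, 1, 0, 0, 1, 1, 1, 1, 0]))   # 7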
A linear solution is possible (sorry, earlier I argued that this had to be n^2) if you're careful to not actually print the results!
First, let's define a "score" for any set of zeros and ones as the number of ones minus the number of zeroes. So (0,1) has a score of 0, while (0) is -1 and (1,1) is 2.
Now, start from the right. If the right-most digit is a 0 then it can be combined with any group to the left that has a score of 1. So we need to know what groups are available to the left, indexed by score. This suggests a recursive procedure that accumulates groups with scores. The sweep process is O(n) and at each step the process has to check whether it has created a new group and extend the table of known groups. Checking for a new group is constant time (lookup in a hash table). Extending the table of known groups is also constant time (at first I thought it wasn't, but you can maintain a separate offset that avoids updating each entry in the table).
So we have a peculiar situation: each step of the process identifies a set of results of size O(n), but the calculation necessary to do this is constant time (within that step). So the process itself is still O(n) (proportional to the number of steps). Of course, actually printing the results (either during the step, or at the end) makes things O(n^2).
I'll write some Python code to test/demonstrate.
Here we go:
SCORE = [-1, 1]

class Accumulator:

    def __init__(self):
        self.offset = 0
        self.groups_to_right = {}  # map from score to start indices
        self.even_groups = []
        self.index = 0

    def append(self, digit):
        score = SCORE[digit]
        # want existing groups at -score, to sum to zero
        # but there's an offset to correct for, so we really want
        # groups at -(score+offset)
        corrected = -(score + self.offset)
        if corrected in self.groups_to_right:
            # if this were a linked list we could save a reference
            # to the current value. it's not, so we need to filter
            # on printing (see below)
            self.even_groups.append(
                (self.index, self.groups_to_right[corrected]))
        # this updates all the known groups
        self.offset += score
        # this adds the new one, which should be at the index so that
        # index + offset = score (so index = score - offset)
        groups = self.groups_to_right.get(score - self.offset, [])
        groups.append(self.index)
        self.groups_to_right[score - self.offset] = groups
        # and move on
        self.index += 1
        # print(self.offset)
        # print(self.groups_to_right)
        # print(self.even_groups)
        # print(self.index)

    def dump(self):
        # printing the results does take longer, of course...
        for (end, starts) in self.even_groups:
            for start in starts:
                # this discards the extra points that were added
                # to the data after we added it to the results
                # (avoidable with linked lists)
                if start < end:
                    print((start, end))

    @staticmethod
    def run(input):
        accumulator = Accumulator()
        print(input)
        for digit in input:
            accumulator.append(digit)
        accumulator.dump()
        print()

Accumulator.run([0, 1, 0, 0, 1, 1, 1, 1, 0])
And the output:
dynamic: python dynamic.py
[0, 1, 0, 0, 1, 1, 1, 1, 0]
(0, 1)
(1, 2)
(1, 4)
(3, 4)
(0, 5)
(2, 5)
(7, 8)
You might be worried that some additional processing (the filtering for start < end) is done in the dump routine that displays the results. But that's because I am working around Python's lack of linked lists (I want to both extend a list and save the previous value in constant time).
It may seem surprising that the result is of size O(n^2) while the process of finding the results is O(n), but it's easy to see how that is possible: at one "step" the process identifies a number of groups (of size O(n)) by associating the current point (self.index in append, or end in dump()) with a list of start points (self.groups_to_right[...] in append, or starts in dump()).
Update: One further point. The table of "groups to the right" will have a "typical width" of sqrt(n) entries (this follows from the central limit theorem - it's basically a random walk in 1D). Since an entry is added at each step, the average length is also sqrt(n) (the n values shared out over sqrt(n) bins). That means that the expected time for this algorithm (i.e. with random inputs), if you include printing the results, is O(n^(3/2)), even though the worst case is O(n^2).
Answering the question directly:
you have to construct an example where there are more than O(N) matches.
Let N be of the form 2^k, with the following input:
0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 (here, N = 16)
The number of matches (where 0 is the starting character):
length   #
2        N/2
4        N/2 - 1
6        N/2 - 2
8        N/2 - 3
...
N        1
The total number of matches (starting with 0) is: (1+N/2) * (N/2) / 2 = N^2/8 + N/4
The matches starting with 1 are almost the same, except that there is one less for each length.
Total: (N^2/8 + N/4) * 2 - N/2 = N^2/4
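A quick brute-force check of that count (my own sketch, not part of the answer) for N = 16 agrees: every even-length window of the alternating string is balanced, and there are N^2/4 = 64 of them:

def brute_force_count(bits):
    n = len(bits)
    count = 0
    for i in range(n):
        balance = 0
        for j in range(i, n):
            balance += 1 if bits[j] == 1 else -1
            if balance == 0:           # equal number of 0s and 1s in bits[i..j]
                count += 1
    return count

print(brute_force_count([0, 1] * 8))   # 64 == 16**2 / 4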
Every interval will contain at least one occurrence of either (0,1) or (1,0). Therefore, it's simply a matter of finding every occurrence of (0,1) or (1,0), then for each seeing whether it is adjacent to an existing solution or whether the two bookending elements form another solution.
With a bit of storage trickery you will be able to find all solutions in linear time. Enumerating them will be O(N^2), but you should be able to encode them in O(N) space.

Permutation of a vector

Suppose I have a vector:
0 1 2 3 4 5
[45,89,22,31,23,76]
And a permutation of its indices:
[5,3,2,1,0,4]
Is there an efficient way to resort it according to the permutation thus obtaining:
[76,31,22,89,45,23]
Using at most O(1) additional space?
Yes. Starting from the leftmost position, we put the element found there into its correct position i by swapping it with the (other) misplaced element at that position i. This is where we need the O(1) additional space. We keep swapping pairs of elements around until the element in the current position is correct. Only then do we proceed to the next position and do the same thing.
Example:
[5 3 2 1 0 4] initial state
[4 3 2 1 0 5] swapped (5,4), 5 is now in the correct position, but 4 is still wrong
[0 3 2 1 4 5] swapped (4,0), now both 4 and 0 are in the correct positions, move on to next position
[0 1 2 3 4 5] swapped (3,1), now 1 and 3 are both in the correct positions, move on to next position
[0 1 2 3 4 5] all elements are in the correct positions, end.
Note:
Since each swap operation puts at least one (of the two) elements in the correct position, we need no more than N such swaps altogether.
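Here is a runnable sketch of the in-place idea (my own code, not Zach's exact routine) in the direction the question uses it, i.e. the new a[i] becomes the old a[p[i]]; it follows each cycle once and reuses the index array to mark finished positions, so only O(1) extra space is needed:

def apply_permutation(a, p):
    for i in range(len(a)):
        if p[i] == i:
            continue
        tmp = a[i]             # value the end of the cycle will receive
        j = i
        while p[j] != i:       # walk the cycle i -> p[i] -> p[p[i]] -> ...
            nxt = p[j]
            a[j] = a[nxt]      # pull the wanted value into position j
            p[j] = j           # mark j as settled
            j = nxt
        a[j] = tmp             # close the cycle
        p[j] = j
    return a

a = [45, 89, 22, 31, 23, 76]
p = [5, 3, 2, 1, 0, 4]
print(apply_permutation(a, p))   # [76, 31, 22, 89, 45, 23]

(Note that this consumes p, leaving it as the identity; keep a copy if you still need it.)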
Zach's solution is very good.
Still, I was wondering why there is any need to sort. If you have the permutation of the indices, use the values as a pointer to the old array.
This may eliminate the need to sort the array in the first place. This is not a solution that can be used in all cases, but it will work fine in most cases.
For example:
a = [45,89,22,31,23,76];
b = [5,3,2,1,0,4]
Now if you want to loop through the values in a, you can do something like (pseudo-code):
for i = 0 to 5
{
    process(a[i]);
}
If you want to loop through the values in the new order, do:
for i = 0 to 5
{
    process(a[b[i]]);
}
As mentioned earlier, this solution may be sufficient in many cases, but not in others. For the other cases you can use the solution by Zach. But for the cases where this solution can be used, it is better because no sorting is needed at all.

Resources