Heap sort pseudocode algorithm

In the heap sort algorithm:

n = m
for k := m div 2 down to 0
    downheap(k);
repeat
    t := a[0]
    a[0] := a[n-1]
    a[n-1] := t
    n--
    downheap(0);
until n <= 0
Can someone please explain to me what is done in these lines?
n=m
for k:= m div 2 down to 0
downheap(k);
I think that is the heap-building process, but what is meant by "for k := m div 2 down to 0"?
Also, is n the number of items? So in an array representation, the last element is stored at a[n-1]?
But why do it for n >= 0? Can't we finish at n > 0, since the first element gets automatically sorted?

n=m
for k:= m div 2 down to 0
downheap(k);
In a binary heap, half of the nodes have no children. So you can build a heap by starting at the midpoint and sifting items down. What you're doing here is building the heap from the bottom up. Consider this array of five items:
[5, 3, 2, 4, 1]
Or, as a tree:
  5
 3 2
4 1
The length is 5, so we want to start at index 2 (assuming a 1-based heap array). downheap, then, will look at the node labeled 3 and compare it with its smallest child. Since 1 is smaller than 3, we swap the items, giving:
  5
 1 2
4 3
Since we reached the leaf level, we're done with that item. Move on to the first item, 5. It's larger than its smallest child, 1, so we swap the items:
  1
 5 2
4 3
But the item 5 is still larger than its children, so we do another swap:
  1
 3 2
4 5
And we're done. You have a valid heap.
It's instructive to do that by hand (with pencil and paper) to build a larger heap--say 10 items. That will give you a very good understanding of how the algorithm works.
For purposes of building the heap in this way, it doesn't matter if the array indexes start at 0 or 1. If the array is 0-based, then you end up making one extra call to downheap, but that doesn't do anything because the node you're trying to move down is already a leaf node. So it's slightly inefficient (one extra call to downheap), but not harmful.
It is important, however, that if your root node is at index 1, you stop your loop with n > 0 rather than n >= 0. In the latter case, you could very well end up adding a bogus value to your heap and removing an item that's supposed to be there.

for k:= m div 2 down to 0
This appears to be pseudocode for:
for(int k = m/2; k >= 0; k--)
Or possibly
for(int k = m/2; k > 0; k--)
Depending on whether "down to 0" is inclusive or not.
Also is n the number of items?
Initially, yes, but it decrements on the line n--.
Can't we finish at n > 0, because the first element gets automatically sorted?
Yes, this is effectively what happens. Once n becomes zero at n--, it's most of the way through the loop body, so the only thing that gets executed after that and before until n <= 0 terminates is downheap(0);
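To tie the pieces together, here is a small runnable Python sketch of the same algorithm (my own rendering of the pseudocode, using a 0-based array and a max-heap so the result comes out ascending; the name downheap is kept from the question). Note that the outer loop stops once a single element remains, which is exactly the "can't we finish at n > 0" point from the question:

def downheap(a, k, n):
    # Sift a[k] down until the max-heap property holds within a[0..n-1].
    while 2 * k + 1 < n:
        child = 2 * k + 1
        # Pick the larger child, if a right child exists.
        if child + 1 < n and a[child + 1] > a[child]:
            child += 1
        if a[k] >= a[child]:
            break
        a[k], a[child] = a[child], a[k]
        k = child

def heapsort(a):
    n = len(a)
    # Build the heap bottom-up: only nodes n//2 - 1 down to 0 have children.
    for k in range(n // 2 - 1, -1, -1):
        downheap(a, k, n)
    # Repeatedly swap the root (the maximum) to the end and shrink the heap.
    while n > 1:
        a[0], a[n - 1] = a[n - 1], a[0]
        n -= 1
        downheap(a, 0, n)

nums = [5, 3, 2, 4, 1]
heapsort(nums)
print(nums)  # [1, 2, 3, 4, 5]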

Related

Data structure to handle numerous queries on large size array

You are given q queries of the following form, applied to an initially empty list:
1 x y: add the number x to the list y times.
2 n: find the nth number of the sorted list.
Constraints:
1 <= q <= 5 * 100000
1 <= x, y <= 1000000000
1 <= n < length of list
Sample input:
4
1 3 6
1 5 2
2 7
2 4
Sample output:
5
3
This is a competitive programming problem, and it's too early in the morning for me to solve it right now, but I can try to give some pointers.
If you were to store the entire array explicitly, it would obviously blow out your memory. But you can exploit the structure of the array to instead store the number of times each entry appears in the array. So if you got the query
1 3 5
then instead of storing [3, 3, 3], you'd store the pair (3, 5), indicating that the number 3 is in the list 5 times.
You can pretty easily build this, perhaps as a vector of pairs of ints that you update.
The remaining task is to implement the type-2 query, where you find an element by its index. A side effect of the structure we've chosen is that you can't directly index into that vector of pairs of ints, since the indices in that list don't match up with the indices into the hypothetical array. We could just add up the size of each entry in the vector from the start until we hit the index we want, but that's O(n^2) in the number of queries we've processed so far, which is likely too slow. Instead, we probably want some updatable data structure for prefix sums, perhaps as described in this answer.
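For concreteness, here is one way that could look in Python (my own sketch, not code from the original problem: it processes the queries offline, coordinate-compresses the distinct x values, and uses a Fenwick tree over counts so that query 2 becomes a k-th-element search over prefix sums, O(log n) per query):

def solve(queries):
    xs = sorted({q[1] for q in queries if q[0] == 1})
    index = {x: i + 1 for i, x in enumerate(xs)}  # 1-based Fenwick positions
    tree = [0] * (len(xs) + 1)

    def add(i, delta):
        # Add delta to the count at position i.
        while i <= len(xs):
            tree[i] += delta
            i += i & -i

    def kth(k):
        # Descend the implicit Fenwick structure to find the smallest
        # position whose prefix sum reaches k.
        pos = 0
        step = 1 << len(xs).bit_length()
        while step:
            nxt = pos + step
            if nxt <= len(xs) and tree[nxt] < k:
                pos = nxt
                k -= tree[nxt]
            step >>= 1
        return xs[pos]  # pos is 0-based into xs after the walk

    out = []
    for q in queries:
        if q[0] == 1:
            add(index[q[1]], q[2])
        else:
            out.append(kth(q[1]))
    return out

print(solve([(1, 3, 6), (1, 5, 2), (2, 7), (2, 4)]))  # [5, 3]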

Longest substring where every character appear even number of times (possibly zero)

Suppose we have a string s. We want to find the length of the longest substring of s such that every character in the substring appears an even number of times (possibly zero).
Worst-case time: O(nlogn). Worst-case space: O(n).
First, it's obvious that the substring must be of even length. Second, I'm familiar with the sliding-window method where we anchor some right index and look for the left-most index to match the criterion. I tried to apply this idea here but couldn't really formulate it.
Also, it seems to me like a priority queue could come in handy (since the O(nlogn) requirement is sort of hinting at it).
I'd be glad for help!
Let's define the following bitsets:
B[c,i] = 1 if character c appears an odd number of times in s[0,...,i].
Calculating B[c,i] (for all values) takes linear time:
for all c:
    B[c, -1] = 0
for i from 0 to len(s) - 1:
    for all c != s[i]:
        B[c, i] = B[c, i-1]
    B[s[i], i] = B[s[i], i-1] XOR 1
Since the alphabet is of constant size, so are the bitsets (for each i).
Note that the condition:
every character in the substring appears an even number of times
is true for the substring s[i+1,...,j] if and only if the bitset of index i is identical to the bitset of index j (if they differ, some character is repeated an odd number of times in this substring; in the other direction, if some character is repeated an odd number of times, its bit cannot be identical at both ends).
So, if we store all bitsets in some set (hash set/tree set), keeping only the latest entry per bitset, this preprocessing takes O(n) or O(nlogn) time (depending on hash/tree set).
In a second iteration, for each index, find the farthest-away index with an identical bitset (O(1)/O(logn), depending on hash/tree set), compute the substring length, and mark it as a candidate. At the end, take the longest candidate.
This solution is O(n) space for the bitsets, and O(n)/O(nlogn) time, depending on the hash/tree choice.
Pseudo code:
def NextBitset(B, c): # O(1) time: flip the parity bit of c
    B[c] = B[c] XOR 1
    return B

B = all-zeros bitset            # parity of the empty prefix
map = new hash/tree map (bitset -> int)

# first pass: O(n)/O(nlogn) time
for i from 0 to len(s) - 1:
    B = NextBitset(B, s[i])
    # Note: we override, so the map keeps the latest index per bitset.
    map[B] = i

B = all-zeros bitset
# a substring starting at index 0 corresponds to the empty prefix
if B in map: max_distance = map[B] + 1
else: max_distance = 0

# second pass: O(n)/O(nlogn) time
for i from 0 to len(s) - 1:
    B = NextBitset(B, s[i])
    j = map.find(B)             # O(1)/O(logn); always found, the first pass stored it
    max_distance = max(max_distance, j - i)
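As a compact runnable sketch of the same idea (my own rendering, assuming a lowercase ASCII alphabet; it keeps the first index per bitmask instead of the latest, which finds the same longest match):

def longest_even_substring(s):
    # Bit c of mask is 1 iff character c has occurred an odd number of
    # times in the prefix read so far; two equal masks bound a substring
    # in which every character appears an even number of times.
    first = {0: -1}  # mask -> first prefix index where it appeared
    mask, best = 0, 0
    for i, ch in enumerate(s):
        mask ^= 1 << (ord(ch) - ord('a'))
        if mask in first:
            best = max(best, i - first[mask])
        else:
            first[mask] = i
    return best

print(longest_even_substring("aabccab"))  # 6, for the substring "abccab"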
I'm not sure exactly what amit proposes so if this is it, please consider it another explanation. This can be accomplished in a single traversal.
Produce a bitset of length equal to the alphabet's for each index of the string. Store the first index for each unique bitset encountered while traversing the string. Update the largest interval between a current and previously seen bitset.
For example, the string "aabccab":

   a a b c c a b
   0 1 2 3 4 5 6   (index)

 0 1 0 0 0 0 1 1   bitset for the prefix
 0 0 0 1 1 1 1 0   ending at each index
 0 0 0 0 1 0 0 0   (first column: empty prefix)
   ^           ^
   |___________|
   largest interval between the current
   and a previously seen bitset
The update for each iteration can be accomplished in O(1) by preprocessing a bit mask for each character to XOR with the previous bitset:

bitset         mask
  0              1         1
  1     XOR      0    =    1
  0              0         0

means update the character associated with the first bit in the alphabet-bitset.

Cycle sort Algorithm

I was browsing the internet when I found out that there is an algorithm called cycle sort which makes the least number of memory writes. But I am not able to find the algorithm anywhere. How do I detect whether or not a cycle exists in an array?
Can anybody give a complete explanation for this algorithm?
The cycle sort algorithm is motivated by something called a cycle decomposition. Cycle decompositions are best explained by example. Let's suppose that you have this array:
4 3 0 1 2
Let's imagine that we have this sequence in sorted order, as shown here:
0 1 2 3 4
How would we have to shuffle this sorted array to get to the shuffled version? Well, let's place them side-by-side:
0 1 2 3 4
4 3 0 1 2
Let's start from the beginning. Notice that the number 0 got swapped to the position initially held by 2. The number 2, in turn, got swapped to the position initially held by 4. Finally, 4 got swapped to the position initially held by 0. In other words, the elements 0, 2, and 4 all were cycled forward one position. That leaves behind the numbers 1 and 3. Notice that 1 swaps to where 3 is and 3 swaps to where 1 is. In other words, the elements 1 and 3 were cycled forward one position.
As a result of the above observations, we'd say that the sequence 4 3 0 1 2 has cycle decomposition (0 2 4)(1 3). Here, each group of terms in parentheses means "circularly cycle these elements forward." This means to cycle 0 to the spot where 2 is, 2 to the spot where 4 is, and 4 to the spot where 0 was, then to cycle 1 to the spot where 3 was and 3 to the spot where 1 is.
If you have the cycle decomposition for a particular array, you can get it back in sorted order making the fewest number of writes by just cycling everything backward one spot. The idea behind cycle sort is to try to determine what the cycle decomposition of the input array is, then to reverse it to put everything back in its place.
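To make the decomposition concrete, here is a tiny Python helper (my own illustration, not part of cycle sort itself) that extracts the cycles of a permutation of 0..n-1, where perm[i] is the value sitting at position i:

def cycles(perm):
    seen, out = [False] * len(perm), []
    for i in range(len(perm)):
        if not seen[i]:
            cycle, j = [], i
            while not seen[j]:
                seen[j] = True
                cycle.append(perm[j])
                j = perm[j]  # follow the value to its home position
            if len(cycle) > 1:  # fixed points are trivial one-element cycles
                out.append(cycle)
    return out

print(cycles([4, 3, 0, 1, 2]))  # [[4, 2, 0], [3, 1]]: the same cycles as
                                # (0 2 4)(1 3), read in the opposite direction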
Part of the challenge of this is figuring out where everything initially belongs, since a cycle decomposition assumes you know this. Typically, cycle sort works by going to each element and counting up how many elements are smaller than it. This is expensive (it contributes to the Θ(n^2) runtime of the sorting algorithm) but doesn't require any writes.
Here's a Python implementation if anyone needs it:
def cycleSort(vector):
    writes = 0

    # Loop through the vector to find cycles to rotate.
    for cycleStart, item in enumerate(vector):

        # Find where to put the item.
        pos = cycleStart
        for item2 in vector[cycleStart + 1:]:
            if item2 < item:
                pos += 1

        # If the item is already there, this is not a cycle.
        if pos == cycleStart:
            continue

        # Otherwise, put the item there or right after any duplicates.
        while item == vector[pos]:
            pos += 1
        vector[pos], item = item, vector[pos]
        writes += 1

        # Rotate the rest of the cycle.
        while pos != cycleStart:

            # Find where to put the item.
            pos = cycleStart
            for item2 in vector[cycleStart + 1:]:
                if item2 < item:
                    pos += 1

            # Put the item there or right after any duplicates.
            while item == vector[pos]:
                pos += 1
            vector[pos], item = item, vector[pos]
            writes += 1

    return writes

x = [0, 1, 2, 2, 2, 2, 1, 9, 3.5, 5, 8, 4, 7, 0, 6]
w = cycleSort(x)
print(w, x)

Dynamic programming: can interval of even 1's and 0's be found in linear time?

Found the following interview question on the web:

You have an array of 0s and 1s and you want to output all the intervals (i, j) where the number of 0s and the number of 1s are equal. Example:
pos = 0 1 2 3 4 5 6 7 8
      0 1 0 0 1 1 1 1 0
One interval is (0, 1), because there the numbers of 0s and 1s are equal. There are many other intervals; find all of them in linear time.

I think there is no linear-time algorithm, as there may be n^2 such intervals. Am I right? How can I prove that there are n^2 such intervals?
This is the fastest way I can think of to do this, and it is linear in the number of intervals there are.
Let L be your original list of numbers and A be a hash of empty arrays, where initially A[0] = [-1] (a marker for the start of the list):
sum = 0
for i in 0..n-1
    if L[i] == 0:
        sum--
        A[sum].push(i)
    elif L[i] == 1:
        sum++
        A[sum].push(i)
Now A is essentially an x-y graph of the running sum of the sequence (x is the index in the list, y is the sum). Every time two x values x1 and x2 map to the same y value, you have an interval (x1, x2] where the number of 0s and 1s is equal.
For every array M in A with m = M.length, there are m(m-1)/2 intervals where the sum is 0 (the arithmetic sum from 1 to m - 1).
Using your example to calculate A by hand, we use this chart:

L        #  0  1  0  1  0  0  1  1  1  1  0
A keys   0 -1  0 -1  0 -1 -2 -1  0  1  2  1
L index -1  0  1  2  3  4  5  6  7  8  9 10

(I've added a # to represent the start of the list, with an index of -1. I've also removed all the numbers that are not 0 or 1, since they're just distractions.) A will look like this:
[-2]->[5]
[-1]->[0, 2, 4, 6]
[0]->[-1, 1, 3, 7]
[1]->[8, 10]
[2]->[9]
For any M = [a1, a2, a3, ...], (ai + 1, aj) where j > i will be an interval with the same number of 0s as 1s. For example, in [-1]->[0, 2, 4, 6], the intervals are (1, 2), (1, 4), (1, 6), (3, 4), (3, 6), (5, 6).
Building the array A is O(n), but printing these intervals from A takes time linear in the number of intervals. In fact, that could be your proof that it is not quite possible to do this in linear time in n: it's possible to have more intervals than n, and you need at least that many iterations to print them all.
Unless, of course, you consider building A enough to find all the intervals (since it's obvious from A what the intervals are); then it is linear in n :P
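A compact runnable version of this construction (my own sketch; enumerating the output is the part that costs time proportional to the number of intervals):

from collections import defaultdict

def equal_intervals(L):
    A = defaultdict(list)
    A[0].append(-1)  # marker for the start of the list
    s = 0
    for i, bit in enumerate(L):
        s += 1 if bit == 1 else -1
        A[s].append(i)
    intervals = []
    # Every pair of indexes with the same running sum is one interval.
    for indices in A.values():
        for i in range(len(indices)):
            for j in range(i + 1, len(indices)):
                intervals.append((indices[i] + 1, indices[j]))
    return intervals

print(sorted(equal_intervals([0, 1, 0, 0, 1, 1, 1, 1, 0])))
# [(0, 1), (0, 5), (1, 2), (1, 4), (2, 5), (3, 4), (7, 8)]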
A linear solution is possible (sorry, earlier I argued that this had to be n^2) if you're careful to not actually print the results!
First, let's define a "score" for any set of zeros and ones as the number of ones minus the number of zeroes. So (0,1) has a score of 0, while (0) is -1 and (1,1) is 2.
Now, start from the right. If the right-most digit is a 0 then it can be combined with any group to the left that has a score of 1. So we need to know what groups are available to the left, indexed by score. This suggests a recursive procedure that accumulates groups with scores. The sweep process is O(n) and at each step the process has to check whether it has created a new group and extend the table of known groups. Checking for a new group is constant time (lookup in a hash table). Extending the table of known groups is also constant time (at first I thought it wasn't, but you can maintain a separate offset that avoids updating each entry in the table).
So we have a peculiar situation: each step of the process identifies a set of results of size O(n), but the calculation necessary to do this is constant time (within that step). So the process itself is still O(n) (proportional to the number of steps). Of course, actually printing the results (either during the step, or at the end) makes things O(n^2).
I'll write some Python code to test/demonstrate.
Here we go:
SCORE = [-1, 1]

class Accumulator:

    def __init__(self):
        self.offset = 0
        self.groups_to_right = {}  # map from score to start indices
        self.even_groups = []
        self.index = 0

    def append(self, digit):
        score = SCORE[digit]
        # we want existing groups at -score, to sum to zero,
        # but there's an offset to correct for, so we really want
        # groups at -(score+offset)
        corrected = -(score + self.offset)
        if corrected in self.groups_to_right:
            # if this were a linked list we could save a reference
            # to the current value. it's not, so we need to filter
            # on printing (see below)
            self.even_groups.append(
                (self.index, self.groups_to_right[corrected]))
        # this updates all the known groups
        self.offset += score
        # this adds the new one, which should be at the index so that
        # index + offset = score (so index = score - offset)
        groups = self.groups_to_right.get(score - self.offset, [])
        groups.append(self.index)
        self.groups_to_right[score - self.offset] = groups
        # and move on
        self.index += 1
        # print(self.offset)
        # print(self.groups_to_right)
        # print(self.even_groups)
        # print(self.index)

    def dump(self):
        # printing the results does take longer, of course...
        for (end, starts) in self.even_groups:
            for start in starts:
                # this discards the extra points that were added
                # to the data after we added it to the results
                # (avoidable with linked lists)
                if start < end:
                    print((start, end))

    @staticmethod
    def run(input):
        accumulator = Accumulator()
        print(input)
        for digit in input:
            accumulator.append(digit)
        accumulator.dump()
        print()

Accumulator.run([0, 1, 0, 0, 1, 1, 1, 1, 0])
And the output:
dynamic: python dynamic.py
[0, 1, 0, 0, 1, 1, 1, 1, 0]
(0, 1)
(1, 2)
(1, 4)
(3, 4)
(0, 5)
(2, 5)
(7, 8)
You might be worried that some additional processing (the filtering for start < end) is done in the dump routine that displays the results. But that's because I am working around Python's lack of linked lists (I want to both extend a list and save the previous value in constant time).
It may seem surprising that the result is of size O(n^2) while the process of finding the results is O(n), but it's easy to see how that is possible: at one "step" the process identifies a number of groups (of size O(n)) by associating the current point (self.index in append, or end in dump()) with a list of start points (self.groups_to_right[...] in append, or starts in dump()).
Update: one further point. The table of "groups to the right" will have a "typical width" of sqrt(n) entries (this follows from the central limit theorem; it's basically a random walk in 1D). Since an entry is added at each step, the average length is also sqrt(n) (the n values shared out over sqrt(n) bins). That means that the expected time for this algorithm (i.e. with random inputs), if you include printing the results, is O(n^(3/2)), even though the worst case is O(n^2).
Answering the question directly: you have to construct an example where there are more than O(N) matches.
Let N be of the form 2^k, with the following input:
0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 (here, N = 16)
The number of matches (where 0 is the starting character):

length   count
2        N/2
4        N/2 - 1
6        N/2 - 2
8        N/2 - 3
...
N        1
The total number of matches (starting with 0) is: (1 + N/2) * (N/2) / 2 = N^2/8 + N/4.
The matches starting with 1 are almost the same, except that there is one fewer for each length.
Total: (N^2/8 + N/4) * 2 - N/2 = N^2/4
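A quick empirical check of that N^2/4 count (my own snippet, using the standard prefix-sum counting trick: every pair of equal prefix sums is one balanced interval):

from collections import Counter

def count_equal_intervals(L):
    counts, s, total = Counter({0: 1}), 0, 0
    for bit in L:
        s += 1 if bit == 1 else -1
        total += counts[s]  # pair the current prefix with every earlier equal one
        counts[s] += 1
    return total

N = 16
print(count_equal_intervals([i % 2 for i in range(N)]), N * N // 4)  # 64 64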
Every interval will contain at least one occurrence of either (0,1) or (1,0). Therefore, it's simply a matter of finding every occurrence of (0,1) or (1,0), then for each seeing whether it is adjacent to an existing solution or whether the two bookending elements form another solution.
With a bit of storage trickery you will be able to find all solutions in linear time. Enumerating them will be O(N^2), but you should be able to encode them in O(N) space.

Finding the minimum and maximum element from one of many arrays

I received a question during an Amazon interview and would like assistance with solving it.
Given N arrays of size K each. The K elements within each of the N arrays are sorted, and all N*K elements are unique. Choose a single element from each of the N arrays; from the chosen subset of N elements, subtract the minimum element from the maximum. This difference should be the least possible.
Sample:
N = 3, K = 3
Array 1: 6, 16, 67
Array 2: 11, 17, 68
Array 3: 10, 15, 100
Here, if 16, 17, 15 are chosen, we get the minimum difference
17 - 15 = 2.
I can think of an O(N*K*N) solution (edited after zivo correctly pointed this out; not a good solution now :( ):
1. Take N pointers, initially pointing to the first element of each of the N arrays.

6, 16, 67
^
11, 17, 68
^
10, 15, 100
^

2. Find the highest and lowest of the currently pointed-to elements, which takes O(N) (here 6 and 11), and find the difference between them (5).

3. Increment the pointer pointing to the lowest element by 1 in that array.

6, 16, 67
   ^
11, 17, 68
^
10, 15, 100    (difference: 5)
^

4. Keep repeating steps 2 and 3 and store the minimum difference.

6, 16, 67
   ^
11, 17, 68
^
10, 15, 100    (difference: 5)
    ^

6, 16, 67
   ^
11, 17, 68
    ^
10, 15, 100    (difference: 2)
    ^

The above will be the required solution.

6, 16, 67
   ^
11, 17, 68
    ^
10, 15, 100    (difference: 84)
        ^

6, 16, 67
       ^
11, 17, 68
    ^
10, 15, 100    (difference: 83)
        ^

And so on...
EDIT:
Its complexity can be reduced by using a heap (as suggested by Uri). I thought of it but faced a problem: each time an element is extracted from the heap, its array number has to be found in order to increment the corresponding pointer for that array. An efficient way to find the array number can definitely reduce the complexity to O(K*N*log(K*N)). One naive way is to use a data structure like this:
struct
{
    int element;
    int arraynumber;
};

and reconstruct the initial data like:

6|0,  16|0,  67|0
11|1, 17|1,  68|1
10|2, 15|2, 100|2
Initially, keep the current max of the first column and insert the pointed-to elements into the heap. Now, each time an element is extracted, its array number can be found, the pointer in that array is incremented, the newly pointed element can be compared to the current max, and the max pointer can be adjusted accordingly.
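In Python, that heap approach could look like this (my own sketch; each heap entry carries its array number, so no separate lookup is needed):

import heapq

def min_range(arrays):
    # One (value, array number, element index) entry per array.
    heap = [(arr[0], i, 0) for i, arr in enumerate(arrays)]
    heapq.heapify(heap)
    cur_max = max(arr[0] for arr in arrays)
    best = float('inf')
    while True:
        val, i, j = heapq.heappop(heap)  # smallest currently pointed element
        best = min(best, cur_max - val)
        if j + 1 == len(arrays[i]):
            return best  # this array is exhausted, so we can stop
        nxt = arrays[i][j + 1]
        cur_max = max(cur_max, nxt)
        heapq.heappush(heap, (nxt, i, j + 1))

print(min_range([[6, 16, 67], [11, 17, 68], [10, 15, 100]]))  # 2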
So here is an algorithm to solve this problem in two steps:
The first step is to merge all your arrays into one sorted array, which would look like this:
combined_val[] - holds all the numbers
combined_ind[] - holds the index of the array each number originally belonged to
This step can be done easily in O(K*N*log(N)), but I think you can do better than that too (maybe not; you can look up variants of merge sort, because they do a similar step).
Now the second step:
It is easier to just show code instead of explaining, so here is the pseudocode:
int count[N] = { 0 };
int head = 0;
int diffcnt = 0;
// mindiff is initialized to overall maximum value - overall minimum value
int mindiff = combined_val[N * K - 1] - combined_val[0];
for (int i = 0; i < N * K; i++)
{
    count[combined_ind[i]]++;
    if (count[combined_ind[i]] == 1) {
        // diffcnt counts how many arrays have at least one element between
        // indexes "head" and "i". Once diffcnt reaches N it will stay N and
        // not increase anymore.
        diffcnt++;
    } else {
        while (count[combined_ind[head]] > 1) {
            // We try to move the head index as far forward as possible while
            // keeping diffcnt constant: if count[combined_ind[head]] is 1,
            // then moving head forward would decrease diffcnt, which is
            // something we don't want to do.
            count[combined_ind[head]]--;
            head++;
        }
    }
    if (diffcnt == N) {
        // i.e. we got at least one element from all arrays
        if (combined_val[i] - combined_val[head] < mindiff) {
            mindiff = combined_val[i] - combined_val[head];
            // if you want the actual numbers too, you can save i and head
            // here and extract the data from them later
        }
    }
}
The result is in mindiff.
The running time of the second step is O(N * K), because the "head" index will move at most N*K times. So the inner loop does not make this quadratic; it is still linear.
The total running time is O(N * K * log(N)); this comes from the merging step, so if you can come up with a better merging step you can probably bring it down to O(N * K).
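A Python rendering of the same two steps (my own sketch: a plain sort stands in for the merge, and the window [head, i] always keeps every array represented):

from collections import defaultdict

def min_range_merge(arrays):
    merged = sorted((v, i) for i, arr in enumerate(arrays) for v in arr)
    count = defaultdict(int)  # window elements per array
    covered = 0               # how many arrays appear in the window
    head = 0
    best = merged[-1][0] - merged[0][0]
    for i, (val, origin) in enumerate(merged):
        count[origin] += 1
        if count[origin] == 1:
            covered += 1
        # Shrink from the left while every array stays represented.
        while count[merged[head][1]] > 1:
            count[merged[head][1]] -= 1
            head += 1
        if covered == len(arrays):
            best = min(best, val - merged[head][0])
    return best

print(min_range_merge([[6, 16, 67], [11, 17, 68], [10, 15, 100]]))  # 2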
This problem is for managers.
You have 3 developers (N1), 3 testers (N2) and 3 DBAs (N3).
Choose the least divergent team that can run a project successfully.
int[n] result; // result[i] keeps the element from bucket N_i
int[n] latest; // latest[i] keeps the latest element visited from bucket N_i

Iterate over the elements of (N_1 + N_2 + N_3) in sorted order
{
    Keep track of the latest element visited from each bucket N_i by updating the 'latest' array;
    if (boundary(latest) < boundary(result))
    {
        result = latest;
    }
}

int boundary(int[] array)
{
    return Max(array) - Min(array);
}
I have an O(K*N*log(K)) solution, with typical execution much less; currently I cannot think of anything better. I'll explain the easier-to-describe version first (with somewhat longer execution):

For each element f in the first array (loop through K elements)
    For each array, starting from the second array (loop through N-1 arrays)
        Do a binary search on the array and find the element closest to f. This is your element (log(K)).

This algorithm can be optimized if, for each array, you add a new floor index. When performing the binary search, search between 'floor' and 'K-1'.
Initially the floor index is 0, and for the first element you search through the entire array. Once you find the element closest to 'f', update the floor index with the index of that element. The worst case is the same (the floor may never update, if the maximum element of the first array is smaller than every other minimum), but the average case will improve.
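A sketch of that floor-index idea in Python (my own rendering of this answer's heuristic as I read it; it relies on the values of f increasing, so each floor only ever moves forward):

import bisect

def min_range_floor(arrays):
    first, rest = arrays[0], arrays[1:]
    floors = [0] * len(rest)  # per-array floor index
    best = float('inf')
    for f in first:  # K iterations, in increasing order
        chosen = [f]
        for a, arr in enumerate(rest):
            pos = bisect.bisect_left(arr, f, floors[a])
            # The element closest to f is arr[pos] or arr[pos - 1].
            cands = [p for p in (pos - 1, pos) if floors[a] <= p < len(arr)]
            nearest = min(cands, key=lambda p: abs(arr[p] - f))
            floors[a] = nearest  # never search below this index again
            chosen.append(arr[nearest])
        best = min(best, max(chosen) - min(chosen))
    return best

print(min_range_floor([[6, 16, 67], [11, 17, 68], [10, 15, 100]]))  # 2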
Correctness proof for the accepted answer (Terminal's solution)
Assume that the algorithm finds a series A=<A[1],A[2],...,A[N]> which isn't the optimal solution (R).
Consider the index j in R such that the item R[j] is the first item among R that the algorithm examines and replaces with the next item in its row.
Let A' denote the candidate solution at that phase (prior to the replacement). Since R[j]=A'[j] is the minimum value of A', it's also the minimum of R.
Now, consider the maximum value of R, R[m]. If A'[m]<R[m], then R can be improved by replacing R[m] with A'[m], which contradicts the fact that R is optimal. Therefore, A'[m]=R[m].
In other words, R and A' share the same maximum and minimum, therefore they are equivalent. This completes the proof: if R is an optimal solution, then the algorithm is guaranteed to find a solution as good as R.
for every element in the 1st array
    choose the element in the 2nd array that is closest to the element in the 1st array
    current_array = 2;
    do
    {
        choose the element in current_array+1 that is closest to the element in current_array
        current_array++;
    } while (current_array < n);

complexity: O(k^2*n)
Here is my logic on how to resolve this issue, keeping in mind that we need to pick one element from each of the N arrays (to compute the least minimum).

// If we take the above values as an example, the idea is to sort all
// three arrays together while keeping another array that records which
// set each value came from (1, 2 or 3; this could be extended to n sets):

 1   3   2   3   1   2   1   2   3     // the array that holds the set indexes
 6  10  11  15  16  17  67  68  100    // the sorted combined array

// The rule is to make sure the indexes of the values we are comparing are
// different (so that we compare elements from different sets). For example,
// the first element is index:1|value:6; we hold that value 6 (it is the
// value we will use to compute the least minimum), then we go to the edge
// of the comparison, which is the second different index: we skip
// index:3|value:10 (we remove it from the array) and compare
// index:2|value:11 to index:1|value:6, obtaining 5, which goes into a
// variable named leastMinimum = 5. Now we remove the indexes and values
// we already used, and redo the same steps.
Step 1:

 1   3   2   3   1   2   1   2   3
 6  10  11  15  16  17  67  68  100
 |       |
 11 - 6 = 5

leastMinimum = 5

Step 2:

 3   1   2   1   2   3
15  16  17  67  68  100
 |       |
 17 - 15 = 2

leastMinimum = min(2, leastMinimum) // which equals 2

Step 3:

 1   2   3
67  68  100
 |        |
 100 - 67 = 33

leastMinimum = min(33, leastMinimum) // equal to the old leastMinimum, which is 2
Now suppose we have elements from the same array that are very close to each other (k = 2 this time, which means we only have 3 sets with two values each):

// After sorting the n arrays we will have the below indexes array and values array:

1  1  2  3  2  3
6  7  8 12 15 16
*     *  *

// * We skip the second occurrence of set 1 (1|7) and take the least minimum
// of 1|6 and 3|12 (index:2|value:8 is removed, as it is not at the edges; we
// pick the minimum and maximum of the unique-index subset of n elements):

1  3
6 12
12 - 6 = 6

// Second step: we remove the values we already used, so the array becomes:

1  2  3
7 15 16
*  *  *
16 - 7 = 9
Note: another approach that consumes more memory would consist of creating N sub-arrays, from which we would compare the maximum - minimum.
So from the below sorted values array and its corresponding indexes array, we extract three other sub-arrays:

 1   3   2   3   1   2   1   2   3
 6  10  11  15  16  17  67  68  100

First array:
 1   3   2
 6  10  11
11 - 6 = 5

Second array:
 3   1   2
15  16  17
17 - 15 = 2

Third array:
 1   2   3
67  68  100
100 - 67 = 33
