Sorting a permutation with minimum cost - algorithm

I am given a permutation of the elements {1, 2, 3, ..., N} and I have to sort it using swap operations. An operation that swaps elements x and y has cost min(x, y).
I need to find the minimum total cost of sorting the permutation. I thought about a greedy that goes from N down to 1 and puts each element into its position with one swap, but this is not a good idea.

Would this be optimal:
Find element 2
If it is not in the correct place already
    Find element at position 2
    If swapping that with 2 puts both in the right place
        Swap them
        Cost = Cost + min(2, other swapped element)
repeat
    Find element 1
    If element 1 is at position 1
        Find first element that is in the wrong place
        If no element found
            set sorted true
        else
            Swap found element with element 1
            Cost = Cost + 1
    else
        Find element that should go to the position where 1 is
        Swap found element with element 1
        Cost = Cost + 1
until sorted is true

If seeks are trivial, then the minimum number of swaps is determined by the number of cycles. It follows a principle similar to Cuckoo Hashing. You take the first value in the permutation and look at the value stored at the index that first value names. If the two point at each other, a single swap fixes both.
[3 2 1] : Value 3 is at index 1, so look at the value at index 3.
[3 2 1] : Value 1 is at index 3, so a two-index cycle exists. Swap these values.
If not, push the first index onto a stack and seek the index for the value at the second index. There will eventually be a cycle. At that point, start swapping by popping values off the stack. This takes a number of swaps equal to n-1, where n is the length of the cycle.
[3 1 2] : Value 3 is at index one, so look at the value at index 3.
[3 1 2] : Value 2 is at index 3, so add 3 to the stack and seek to index 2. Also store 3 as the beginning value of the cycle.
[3 1 2] : Value 1 is at index 2, so add 2 to the stack and seek to index 1.
[3 1 2] : Value 3 is the beginning of the cycle, so pop 2 off the stack and swap values 1 and 2.
[1 3 2] : Pop 3 off the stack and swap values 2 and 3, resulting in a sorted list after 2 swaps.
[1 2 3]
With this algorithm, the maximum number of swaps will be N-1, where N is the total number of values. This occurs when there is an N length cycle.
EDIT : This algorithm gives the minimum number of swaps, but not necessarily the minimum cost under the min(x, y) cost function. I haven't done the math, but I believe the only time the decomposition swap(x, y) = {swap(1, x), swap(1, y), swap(1, x)} shouldn't be used is when x is in {2, 3} and n < 2; that should be easy enough to write as a special case. It may be better to check and place 2 and 3 explicitly, then follow the algorithm mentioned in the comments to achieve sorting in two operations.
EDIT 2 : Pretty sure this will catch all cases.
while ( unsorted ) {
    while ( 1 != index(1) )
        swap( 1, index(1) )
    if ( index(2) == value#(2) )
        swap( 2, value#(2) )
    else
        swap( 1, highest value out of place )
}

If you have a permutation of the numbers 1, 2, ..., N, then the sorted collection will be precisely 1, 2, ..., N. So you know the answer with complexity O(0) (i.e. you don't need an algorithm at all).
If you actually want to sort the range by repeated swapping, you can repeatedly "advance and cycle": Advance over the already sorted range (where a[i] == i), and then swap a[i] with a[a[i]] until you complete the cycle. Repeat until you reach the end. That needs at most N − 1 swaps, and it basically performs a cycle decomposition of the permutation.
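As a concrete illustration, here is a minimal Python sketch (my code, not from the answer) of this "advance and cycle" scheme; for a permutation of 1..N stored 0-indexed, the value a[i] belongs at index a[i]-1:

def cycle_sort_swaps(a):
    # Walk left to right; while position i holds the wrong value, swap
    # a[i] to its home index. Each swap fixes at least one value, so at
    # most N - 1 swaps happen in total.
    a = list(a)
    swaps = 0
    for i in range(len(a)):
        while a[i] != i + 1:
            j = a[i] - 1              # home index of the value a[i]
            a[i], a[j] = a[j], a[i]
            swaps += 1
    return swaps

print(cycle_sort_swaps([3, 1, 2]))    # 2 swaps, matching the example above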

Hmm. An interesting question. A quick algorithm that came to mind is to use the elements as indices: find the index of the element that has value 1 and swap it with the element whose value equals that index. Eventually 1 ends up in the first position; at that point you have to swap 1 with some element that isn't yet in position, and continue. This tops out at 2N-2 swaps, with a lower limit of N-1 for the permutation (2, 3, ..., N, 1), but the exact cost will vary.
Okay, given the above algorithm and examples, I think the optimum is the following: keep exchanging 1 as above until it first hits first place, then exchange 2 with the element in second place if it's not in place already, then continue swapping 1 with anything not yet in place, until sorted.
set sorted=false
while (!sorted) {
    if (element 1 is in place) {
        if (element 2 is in place) {
            find any element NOT in place
            if (no element found) sorted=true
            else {
                swap 1 with element found
                cost++
            }
        } else {
            swap 2 with element at second place
            cost+=2
        }
    } else {
        find element with number equal to position of element 1
        swap 1 with element found
        cost++
    }
}
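For reference, here is a hedged Python translation of the pseudocode above (my code; values are 1-based, list positions 0-based, and pos maps each value to its current index). It simulates the strategy and tallies the cost under cost(x, y) = min(x, y):

def sort_cost(a):
    a = list(a)
    n = len(a)
    pos = {v: i for i, v in enumerate(a)}      # value -> index

    def swap(x, y):                            # swap values x and y
        i, j = pos[x], pos[y]
        a[i], a[j] = a[j], a[i]
        pos[x], pos[y] = j, i
        return min(x, y)                       # cost of this swap

    cost = 0
    while True:
        if pos[1] != 0:
            # 1 is out of place: swap it with the value that belongs
            # at 1's current position (value pos[1] + 1), cost 1
            cost += swap(1, pos[1] + 1)
        elif pos[2] != 1:
            # 1 is home but 2 is not: swap 2 with whatever sits in
            # second place, cost 2
            cost += swap(2, a[1])
        else:
            # find any value still out of place, or stop
            wrong = next((v for v in range(3, n + 1) if pos[v] != v - 1), None)
            if wrong is None:
                return cost
            cost += swap(1, wrong)             # break into the next cycle, cost 1

print(sort_cost([3, 1, 2]))                    # cost 2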

Use a bucket sort with a bucket size of 1.
The cost is zero, since no swaps occur.
Now make a pass through the bucket array, and swap each value back to its corresponding position in the original array.
That is N swaps.
The sum 1 + 2 + ... + N is N(N+1)/2, giving you an exact fixed cost.
A different interpretation is that you just store from the bucket array back into the original array. That is no swaps, hence the cost is zero, which is a reasonable minimum.

Related

Count unique values in subarrays

An a*b array is given, containing numbers up to 1e5, and we have to sum the counts of unique numbers over every k*k subarray;
there are (a-k+1)*(b-k+1) subarrays.
e.g.
1 2 3 4
3 2 4 1
for k=2
subarrays are
{1,2,3,2} (3 distinct values)
{2,3,2,4} (3 distinct values)
{3,4,4,1} (3 distinct values)
output is 9
Is there a faster approach than keeping a table that stores the count of each number's occurrences in the currently processed k*k subarray (e.g. at index 3 we store the count of 3's in the subarray), moving the k*k window by 1, adding values on the right and removing them on the left; if a count becomes 1 after incrementing, increment the unique-numbers counter; if a count becomes 0 after decrementing, decrement the unique-numbers counter?
After getting to the end of a row, go 1 down and move in the opposite direction.
I'm not worried about memory usage; I'm just looking for a way to do this faster.
a == b is an equivalence relation.
Given A, the set of elements (your subarray), you can find the equivalence classes of the relation with the method you found:
For each element x in the subarray A you look at c[x], an int (all elements of c initialized to 0). If c[x] == 0 then you have a new unique element; either way you increment c[x].
This algorithm is linear in the number of elements in the subarray (obviously you iterate this process for each subarray and sum the results to get what you want).
But the time complexity can't be lower, because you need to look at each element anyway.
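A minimal Python sketch of this counting scheme (my code), applied to a single subarray:

def count_distinct(subarray):
    c = {}                    # c[x]: occurrences of x seen so far
    distinct = 0
    for x in subarray:
        c[x] = c.get(x, 0) + 1
        if c[x] == 1:         # first occurrence -> new equivalence class
            distinct += 1
    return distinct

print(count_distinct([1, 2, 3, 2]))   # 3, as in the example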

Find the number of elements greater than x in a given range

Given an array with n elements, how do you find the number of elements greater than or equal to a given value (x) in the range from index i to index j, in O(log n) complexity?
The queries are of the form (i, j, x), which means: find the number of elements greater than x from the ith till the jth element of the array.
The array is not sorted. i, j & x are different for different queries. Elements of the array are static.
Edit: i, j, x all can be different for different queries!
If we know all queries beforehand, we can solve this problem using a Fenwick tree.
First, we need to sort all elements of the array and all queries together, by value.
So, assuming we have the array [5, 4, 2, 1, 3] and the queries (0, 1, 6) and (2, 5, 2), we get the following after sorting: [1, 2, 2, 3, 4, 5, 6]
Now, we process each value in descending order:
If we encounter an element of the array, we update its index in the Fenwick tree, which takes O(log n).
If we encounter a query, we check how many elements in the query's range have already been added to the tree, which takes O(log n).
For the above example, the process will be:
1st is the query for value 6; as the Fenwick tree is empty, the result is 0.
2nd is element 5 -> add index 0 to the Fenwick tree.
3rd is element 4 -> add index 1 to the tree.
4th is element 3 -> add index 4 to the tree.
5th is element 2 -> add index 2 to the tree.
6th is the query for range (2, 5); we query the tree and get answer 2.
7th is element 1 -> add index 3 to the tree.
Finish.
So, in total, the time complexity of the solution is O((m + n) log(m + n)), where m and n are the number of queries and the number of elements in the input array respectively.
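A sketch of this offline approach in Python (my code, assuming 0-based inclusive query ranges and a strict "greater than x", so a query at value x is processed before elements equal to x):

class BIT:
    # Fenwick tree over positions: point add, prefix count
    def __init__(self, n):
        self.t = [0] * (n + 1)
    def add(self, i):
        i += 1
        while i < len(self.t):
            self.t[i] += 1
            i += i & -i
    def prefix(self, i):          # how many marked positions are <= i
        i += 1
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

def offline_count_greater(a, queries):
    events = [(v, 0, i) for i, v in enumerate(a)]
    events += [(x, 1, q) for q, (i, j, x) in enumerate(queries)]
    ans = [0] * len(queries)
    bit = BIT(len(a))
    # descending by value; at ties, queries (kind 1) come first,
    # so elements equal to x are not counted
    for v, kind, idx in sorted(events, reverse=True):
        if kind == 0:
            bit.add(idx)
        else:
            i, j, _ = queries[idx]
            ans[idx] = bit.prefix(j) - (bit.prefix(i - 1) if i else 0)
    return ans

print(offline_count_greater([5, 4, 2, 1, 3], [(0, 4, 2)]))   # [3]: 5, 4 and 3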
That is possible only if you have the array sorted. In that case, binary search the smallest value passing your condition and compute the count simply by subdividing your index range at the found position into two intervals. Then just compute the length of the interval passing your condition.
If the array is not sorted and you need to preserve its order, you can use an index sort. Put together:
definitions
Let <i0,i1> be your used index range and x be your value.
index sort array part <i0,i1>
So create an array of size m=i1-i0+1 and index sort it. This task is O(m.log(m)) where m<=n.
binary search x position in index array
This task is O(log(m)); you want the smallest index j in <0,m) for which a[ix[j]] >= x.
compute count
Simply count how many indexes are after j, up to m:
count = m-j;
As you can see, if the array is sorted you get O(log(m)) complexity, but if it is not then you need to sort first, which is O(m.log(m)) and worse than the naive O(m) approach; the naive approach should be used only if the array is changing often and can't be kept sorted.
[Edit1] What I mean by index sort
By index sort I mean this: let us have an array a
a[] = { 4,6,2,9,6,3,5,1 }
The index sort means that you create a new array ix of indexes in sorted order, so for example an ascending index sort means:
a[ix[i]]<=a[ix[i+1]]
In our example an index bubble sort runs like this:
// init indexes
a[ix[i]]= { 4,6,2,9,6,3,5,1 }
ix[] = { 0,1,2,3,4,5,6,7 }
// bubble sort 1st iteration
a[ix[i]]= { 4,2,6,6,3,5,1,9 }
ix[] = { 0,2,1,4,5,6,7,3 }
// bubble sort 2nd iteration
a[ix[i]]= { 2,4,6,3,5,1,6,9 }
ix[] = { 2,0,1,5,6,7,4,3 }
// bubble sort 3rd iteration
a[ix[i]]= { 2,4,3,5,1,6,6,9 }
ix[] = { 2,0,5,6,7,1,4,3 }
// bubble sort 4th iteration
a[ix[i]]= { 2,3,4,1,5,6,6,9 }
ix[] = { 2,5,0,7,6,1,4,3 }
// bubble sort 5th iteration
a[ix[i]]= { 2,3,1,4,5,6,6,9 }
ix[] = { 2,5,7,0,6,1,4,3 }
// bubble sort 6th iteration
a[ix[i]]= { 2,1,3,4,5,6,6,9 }
ix[] = { 2,7,5,0,6,1,4,3 }
// bubble sort 7th iteration
a[ix[i]]= { 1,2,3,4,5,6,6,9 }
ix[] = { 7,2,5,0,6,1,4,3 }
So the result of ascending index sort is this:
// ix: 0 1 2 3 4 5 6 7
a[] = { 4,6,2,9,6,3,5,1 }
ix[] = { 7,2,5,0,6,1,4,3 }
The original array stays unchanged; only the index array is changed. The items a[ix[i]], where i=0,1,2,3..., are sorted ascending.
So now if x=4, then on this interval you need to find (binary search) the smallest i that still has a[ix[i]]>=x, so:
// ix:      0 1 2 3 4 5 6 7
a[] =     { 4,6,2,9,6,3,5,1 }
ix[] =    { 7,2,5,0,6,1,4,3 }
a[ix[i]]= { 1,2,3,4,5,6,6,9 }
//                *
i = 3; m=8; count = m-i = 8-3 = 5;
So the answer is 5 items are >=4
[Edit2] Just to be sure you know what binary search means for this
i=0; // init value, marked by `*`
j=4; // max power of 2 < m; i+j is marked by `^`
// ix:      0 1 2 3 4 5 6 7     i  j  i+j  a[ix[i+j]]
a[ix[i]]= { 1,2,3,4,5,6,6,9 }   0  4   4   5>=4 -> j>>=1;
            *       ^
a[ix[i]]= { 1,2,3,4,5,6,6,9 }   0  2   2   3< 4 -> i+=j; j>>=1;
            *   ^
a[ix[i]]= { 1,2,3,4,5,6,6,9 }   2  1   3   4>=4 -> j>>=1;
                * ^
a[ix[i]]= { 1,2,3,4,5,6,6,9 }   2  0   -> stop
                *
a[ix[i]] < x -> a[ix[i+1]] >= x -> i = 2+1 = 3, found in O(log(m))
So you need an index i and a binary bit mask j (powers of 2). At first set i to zero and j to the biggest power of 2 still smaller than m (or n). For example something like this:
i=0; for (j=1;j<=m;j<<=1); j>>=1;
Now in each iteration test whether a[ix[i+j]] satisfies the search condition or not. If yes then update i+=j, else leave i as it is. After that go to the next bit, so j>>=1; if j==0 stop, else iterate again. At the end the found value is a[ix[i]] and the index is i, in log2(m) iterations, which is also the number of bits needed to represent m-1.
In the example above I use the condition a[ix[i]]<4, so the found value was the biggest number still <4 in the array. As we needed to also include 4, I just increment the index once at the end (I could use <=4 instead but was too lazy to rewrite the whole thing again).
The count of such items is then just the number of elements in the array (or interval) minus i.
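In Python, the whole recipe (index sort plus the binary search) can be sketched with bisect (my code, not the answerer's):

from bisect import bisect_left

def count_ge(a, i0, i1, x):
    ix = sorted(range(i0, i1 + 1), key=lambda i: a[i])   # index sort, O(m log m)
    vals = [a[i] for i in ix]                            # a[ix[i]] in ascending order
    j = bisect_left(vals, x)                             # smallest j with a[ix[j]] >= x
    return (i1 - i0 + 1) - j                             # count = m - j

a = [4, 6, 2, 9, 6, 3, 5, 1]
print(count_ge(a, 0, 7, 4))                              # 5 items are >= 4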
The previous answer describes an offline solution using a Fenwick tree, but this problem can also be solved online (and even with updates to the array) with slightly worse complexity. I'll describe such a solution using a segment tree and an AVL tree (any self-balancing BST would do the trick).
First let's see how to solve this problem using a segment tree. We do this by keeping, in every node, the actual elements of the range that the node covers. So for the array A = [9, 4, 5, 6, 1, 3, 2, 8] we'll have:
[9 4 5 6 1 3 2 8] Node 1
[9 4 5 6] [1 3 2 8] Node 2-3
[9 4] [5 6] [1 3] [2 8] Node 4-7
[9] [4] [5] [6] [1] [3] [2] [8] Node 8-15
Since the height of our segment tree is log(n) and at every level we keep n elements, the total amount of memory used is n log(n).
The next step is to sort these arrays, which looks like this:
[1 2 3 4 5 6 8 9] Node 1
[4 5 6 9] [1 2 3 8] Node 2-3
[4 9] [5 6] [1 3] [2 8] Node 4-7
[9] [4] [5] [6] [1] [3] [2] [8] Node 8-15
NOTE: You first need to build the tree and then sort it, to keep the order of the elements in the original array.
Now we can start our range queries, and that works basically the same way as in a regular segment tree, except that when we find a completely overlapping interval, we additionally check the number of elements greater than X. This can be done with binary search in log(n) time, by finding the index of the first element greater than X and subtracting it from the number of elements in that interval.
Let's say our query was (0, 5, 4), so we do a segment search on the interval [0, 5] and end up with the arrays [4, 5, 6, 9] and [1, 3]. We then do a binary search on these arrays for the number of elements greater than 4 and get 3 (from the first array) and 0 (from the second), which brings the total to 3 - our query answer.
An interval search in a segment tree can touch up to log(n) nodes, which means log(n) arrays, and since we're doing a binary search on each of them, the complexity comes to log^2(n) per query.
Now if we want to update the array: since we are using segment trees, it's impossible to add/remove elements efficiently, but we can replace them. Using AVL trees (or other binary trees that allow replacement and lookup in log(n) time) as nodes storing the arrays, we can manage this operation in the same time complexity (replacement in log(n) time).
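Here is a compact Python sketch of such a structure, often called a merge sort tree (my code; it builds children with sorted() instead of a linear merge for brevity, which costs an extra log factor in the build):

from bisect import bisect_right

def build(a):
    size = 1
    while size < len(a):
        size *= 2
    tree = [[] for _ in range(2 * size)]
    for i, v in enumerate(a):
        tree[size + i] = [v]                  # leaves, padded with empty lists
    for v in range(size - 1, 0, -1):
        tree[v] = sorted(tree[2 * v] + tree[2 * v + 1])   # merge the children
    return tree, size

def query(tree, size, l, r, x):
    # count elements > x in a[l..r] (0-indexed, inclusive), O(log^2 n)
    res = 0
    l += size
    r += size + 1
    while l < r:
        if l & 1:
            res += len(tree[l]) - bisect_right(tree[l], x)
            l += 1
        if r & 1:
            r -= 1
            res += len(tree[r]) - bisect_right(tree[r], x)
        l //= 2
        r //= 2
    return res

tree, size = build([9, 4, 5, 6, 1, 3, 2, 8])
print(query(tree, size, 0, 5, 4))             # 3, matching the example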
This is a special variant of orthogonal range counting queries in 2D.
Each element el[i] is transformed into the point (i, el[i]) on the plane,
and the query (i,j,x) is transformed into counting all points in the rectangle [i,j] x [x, +infinity].
You can use 2D Range Trees (for example: http://www.cs.uu.nl/docs/vakken/ga/slides5b.pdf) for this type of query.
The simple idea is to have a tree that stores the points in its leaves
(each leaf contains a single point), ordered by the X-axis.
Each internal node of the tree contains an additional tree that stores all the points from its subtree (ordered by the Y-axis).
The space used is O(n log n).
A simple version does the counting in O(log^2 n) time, but using
fractional cascading
this can be reduced to O(log n).
There is a better solution by Chazelle from 1988 (https://www.cs.princeton.edu/~chazelle/pubs/FunctionalDataStructures.pdf)
with O(n) preprocessing and O(log n) query time.
You can find some solutions with better query time, but they are way more complicated.
I would try to give you a simple approach.
You must have studied merge sort.
In merge sort we keep dividing the array into subarrays and then build it back up; in this approach, instead of discarding the sorted subarrays, we store them as the nodes of a binary tree.
This takes up n log n space and n log n time to build.
Now for each query you just have to find the relevant subarrays; this is done in log n on average and log^2 n in the worst case.
Such a structure is commonly called a merge sort tree (not to be confused with a Fenwick tree).
If you want simple code I can provide you with that.

Find the longest sequence length whose sum is divisible by 3

I have an exercise that needs to be done in O(n) time complexity; however, I can only solve it with an O(n^2) solution.
You have an array and you need to find the length of the longest contiguous sequence whose sum is divisible by 3 without any remainder. For example, for the array {1, 2, 3, -4, -1}, the function will return 4, because the longest sequence whose sum (0) is divisible by 3 is {2, 3, -4, -1}.
My O(n^2) solution is based on arithmetic progressions. Is there any way to do it with O(n) complexity?
Please, I only want a clue or a theoretical explanation. Please don't write the full solution :)
Let's take a look at prefix sums. A subarray [L, R] is divisible by 3 if and only if prefixSum[L - 1] mod 3 = prefixSum[R] mod 3. This observation gives a very simple linear solution (because there are only 3 possible values of a prefix sum mod 3, we can simply find the first and the last occurrence of each).
For example, if the input array is {1, 2, 3, -4, -1}, the prefix sums mod 3 are {0, 1, 0, 0, 2, 1} (there are n + 1 prefix sums because of the empty prefix). Now you can just take a look at the first and last occurrence of 0, 1 and 2.
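A linear-time Python sketch of exactly this idea (my code; first and last position of each prefix residue):

def longest_div3(a):
    first, last = {}, {}
    s = 0
    for i in range(len(a) + 1):       # n + 1 prefix sums, empty prefix included
        if i > 0:
            s = (s + a[i - 1]) % 3
        first.setdefault(s, i)        # first position of this residue
        last[s] = i                   # last position of this residue
    return max(last[r] - first[r] for r in first)

print(longest_div3([1, 2, 3, -4, -1]))   # 4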
As a non-CS person, this is interesting. My first approach was simply to calculate the running sum mod 3. You'll get a sequence over {0,1,2}. Now look for the first and the last 0, the first and the last 1, and the first and the last 2, and compare their respective distances...
Iterate through the array, summing the total as you go. Record the first position where the running sum mod 3 is 0. Also record the first position where it is 1. And, finally, record the first position where it is 2.
Do the same thing backwards, recording the last position where the running sum mod 3 is 0, 1, and 2. That gives three possibilities for the longest sequence - you just check which pair are farthest apart.
You apply dynamic programming.
For every position you compute 3 values:
The longest sequence ending in that position which has sum s = 0 mod 3
The longest sequence ending in that position which has sum s = 1 mod 3
The longest sequence ending in that position which has sum s = 2 mod 3
Given these values for position i, you can easily compute the new ones for position i+1.
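A Python sketch of this recurrence (my code; dp[s] holds the length of the longest subarray ending at the current position with sum congruent to s mod 3, or -infinity if none exists):

def longest_div3_dp(a):
    NEG = float("-inf")
    dp = [NEG, NEG, NEG]
    ans = 0
    for v in a:
        nxt = [NEG, NEG, NEG]
        for s in range(3):
            nxt[(s + v) % 3] = dp[s] + 1     # extend a subarray ending before v
        nxt[v % 3] = max(nxt[v % 3], 1)      # or start a new one at v
        dp = nxt
        if dp[0] > ans:
            ans = dp[0]
    return ans

print(longest_div3_dp([1, 2, 3, -4, -1]))    # 4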

Generate a random integer from 0 to N-1 which is not in the list

You are given N and an int array K[].
The task at hand is to generate a uniformly random number between 0 and N-1 that doesn't occur in K.
N is strictly an integer >= 0.
And K.length is < N-1. And 0 <= K[i] <= N-1. Also assume K is sorted and each element of K is unique.
You are given a function uniformRand(int M) which generates a uniform random number in the range 0 to M-1, and assume this function's complexity is O(1).
Example:
N = 7
K = {0, 1, 5}
the function should return any random number { 2, 3, 4, 6 } with equal
probability.
I could get an O(N) solution for this: first generate a random number between 0 and N - K.length - 1 (there are N - K.length numbers missing from K), then map the generated random number to a number not in K. The second step takes the complexity to O(N). Can it be done better, in maybe O(log N)?
You can use the fact that all the numbers in K[] are between 0 and N-1 and that they are distinct.
For your example case, you generate a random number from 0 to 3. Say you get a random number r. Now you conduct a binary search on the array K[].
Initialize i = K.length/2.
Find K[i] - i. This gives you the number of numbers missing from the array up to K[i].
For example, K[2] = 5. So 3 elements are missing from K[0] to K[2] (2, 3, 4).
Hence you can decide whether you have to conduct the remaining search in the first part of the array K or the latter part. This is because you know r.
This search gives you a complexity of O(log K.length).
EDIT: For example,
N = 7
K = {0, 1, 4} // modified the array to clarify the algorithm steps.
the function should return any random number { 2, 3, 5, 6 } with equal probability.
A random number is generated between 0 and N - K.length - 1 = random{0-3}. Say we get 3. Hence we require the 4th missing number in the array K.
Conduct a binary search on the array K[].
Initial i = K.length/2 = 1.
Now we see K[1] - 1 = 0. Hence no number is missing up to i = 1. Hence we search the latter part of the array.
Now i = 2. K[2] - 2 = 4 - 2 = 2. Hence there are 2 missing numbers up to index i = 2. But we need the 4th missing element, so we again have to search the latter part of the array.
Now we reach an empty array. What should we do now? If we reach an empty array between, say, K[j] and K[j+1], then it simply means that all elements between K[j] and K[j+1] are missing from the array K.
Hence all elements above K[2] are missing from the array, namely 5 and 6. We need the 4th missing element, of which we have already accounted for 2. Hence we choose the second element above K[2], which is 6.
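Here is a Python sketch of this approach (my code): since K[i] - i counts the numbers missing before K[i], a binary search finds the r-th missing number directly:

import random

def rand_not_in(N, K):
    r = random.randrange(N - len(K))     # which missing number we want (0-based)
    lo, hi = 0, len(K)
    while lo < hi:                       # first index with K[mid] - mid > r
        mid = (lo + hi) // 2
        if K[mid] - mid > r:
            hi = mid
        else:
            lo = mid + 1
    return r + lo                        # lo elements of K lie below the answer

print(sorted({rand_not_in(7, [0, 1, 5]) for _ in range(1000)}))   # [2, 3, 4, 6]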
Binary search.
The basic algorithm:
(not quite the same as the other answer - the number is only generated at the end)
Start in the middle of K.
By looking at the current value and its index, we can determine the number of pickable numbers (numbers not in K) to the left.
Similarly, by including N, we can determine the number of pickable numbers to the right.
Now randomly go either left or right, weighted based on the count of pickable numbers on each side.
Repeat in the chosen subarray until the subarray is empty.
Then generate a random number in the range consisting of the numbers before and after the subarray in the array.
The running time would be O(log |K|), and, since |K| < N-1, O(log N).
The exact mathematics for number counts and weights can be derived from the example below.
Extension with K containing a bigger range:
Now let's say (for enrichment purposes) K can also contain values N or larger.
Then, instead of starting with the entire K, we start with a subarray up to position min(N, |K|), and start in the middle of that.
It's easy to see that the N-th position in K (if one exists) will be >= N, so this chosen range includes any possible number we can generate.
From here, we need to do a binary search for N (which gives us the point where all values to the left are < N, even if N itself is not found), since the above algorithm doesn't deal with K containing values greater than N.
Then we just run the algorithm as above with the subarray ending at the last value < N.
The running time would be O(log N), or, more specifically, O(log min(N, |K|)).
Example:
N = 10
K = {0, 1, 4, 5, 8}
So we start in the middle - 4.
Given that we're at index 2, we know there are 2 elements to the left, and the value is 4, so there are 4 - 2 = 2 pickable values to the left.
Similarly, there are 10 - (4+1) - 2 = 3 pickable values to the right.
So now we go left with probability 2/(2+3) and right with probability 3/(2+3).
Let's say we went right, and our next middle value is 5.
We are at the first position in this subarray, and the previous value is 4, so we have 5 - (4+1) = 0 pickable values to the left.
And there are 10 - (5+1) - 1 = 3 pickable values to the right.
We can't go left (0 probability). If we go right, our next middle value would be 8.
There would be 2 pickable values to the left, and 1 to the right.
If we go left, we'd have an empty subarray.
So then we'd generate a number between 5 and 8, which would be 6 or 7 with equal probability.
This can be solved by basically solving this:
Find the rth smallest number not in the given array, K, subject to
conditions in the question.
For that consider the implicit array D, defined by
D[i] = K[i] - i for 0 <= i < L, where L is length of K
We also set D[-1] = 0 and D[L] = N
We also define K[-1] = -1 (so the formula below also covers the case j = -1).
Note, we don't actually need to construct D. Also note that D is sorted (and all elements non-negative), as the numbers in K[] are unique and increasing.
Now we make the following claim:
CLAIM: To find the rth smallest number not in K[], we need to find the rightmost occurrence in D of r', the largest number in D which is < r (say this occurrence is at position j). Such an r' exists, because D[-1] = 0. Once we find such an r' (and j), the number we are looking for is r - r' + K[j].
Proof: Basically, the definition of r' and j tells us that there are exactly r' numbers missing from 0 to K[j], and at least r numbers missing from 0 to K[j+1]. Thus all the numbers from K[j]+1 to K[j+1]-1 are missing (and there are at least r - r' of them), and the number we seek is among them, given by K[j] + (r - r').
Algorithm:
In order to find (r', j), all we need to do is a (modified) binary search for r in D, where we keep moving to the left even if we find r in the array.
This is an O(log |K|) algorithm.
If you are running this many times, it probably pays to speed up your generation operation when even O(log N) per draw isn't acceptable:
Make an empty array G. Starting at zero, count upwards while progressing through the values of K. If a value isn't in K, add it to G. If it is in K, don't add it and advance your K pointer. (This relies on K being sorted.)
Now you have an array G which contains only acceptable numbers.
Use your random number generator to choose a value from G.
This requires O(N) preparatory work, and each generation happens in O(1) time. After N look-ups the amortized time of all operations is O(1).
A Python mock-up:
import random

class PRNG:
    def __init__(self, K, N):
        # Precompute G, the list of numbers in [0, N) not present in K
        self.G = []
        kptr = 0
        for i in range(N):
            if kptr < len(K) and K[kptr] == i:
                kptr += 1
            else:
                self.G.append(i)

    def getRand(self):
        rn = random.randint(0, len(self.G) - 1)
        return self.G[rn]

prng = PRNG([0, 1, 5], 7)
for i in range(20):
    print(prng.getRand())

How to find number of expected swaps in bubble sort in better than O(n^2) time

I am stuck on problem http://www.codechef.com/JULY12/problems/LEBOBBLE
Here it is required to find number of expected swaps.
I tried an O(n^2) solution but it is timing out.
The code is like:
swaps = 0
for(i = 0;i < n-1;i++)
for(j = i+1;j<n;j++)
{
swaps += expected swap of A[i] and A[j]
}
Since the probabilities of the elements vary, every pair needs to be compared. So according to me the above code snippet should be the most efficient possible, but it is timing out.
Can it be done in O(n log n) or in any complexity better than O(n^2)?
Give me any hint if possible.
Alright, let's think about this.
We realize that every number needs to be swapped, sooner or later, with every number after it that's less than it. Thus, the total number of swaps for a given number is the number of smaller numbers after it. However, computing this directly is still O(n^2) time.
For every pass of the outer loop of bubble sort, one element gets put in the correct position. Without loss of generality, we'll say that for every pass, the largest element remaining gets sorted to the end of the list.
So, in the first pass of the outer loop, the largest number is put at the end. This takes q swaps, where q is the number of positions the number started away from the final position.
Thus, we can say that it will take q1 + q2 + ... + qn swaps to complete this bubble sort. However, keep in mind that with every swap, one number is taken either one position closer to or one position farther away from its final position. In our specific case, if a number is in front of a larger number, and at or in front of its correct position, one more swap will be required. However, if a number is behind a larger number and behind its correct position, one less swap will be required.
We can see that this is true with the following example:
5 3 1 2 4
=> 3 5 1 2 4
=> 3 1 5 2 4
=> 3 1 2 5 4
=> 3 1 2 4 5
=> 1 3 2 4 5
=> 1 2 3 4 5 (6 swaps total)
"5" moves 4 spaces. "3" moves 1 space. "1" moves 2 spaces. "2" moves 2 spaces. "4" moves 1 space. Total: 10 spaces.
Note that 3 is behind 5 and in front of its correct position. Thus one more swap will be needed. 1 and 2 are behind 3 and 5 -- four less swaps will be needed. 4 is behind 5 and behind its correct position, thus one less swap will be needed. We can see now that the expected value of 6 matches the actual value.
We can compute Σq by sorting the list first, keeping the original positions of each of the elements in memory while doing the sort. This is possible in O(n log n + n) time.
We can also see what numbers are behind what other numbers, but this is impossible to do in faster than O(n^2) time. However, we can get a faster solution.
Every swap effectively moves two numbers toward their correct positions, but some swaps actually do nothing, because one number gets closer while another gets farther. The first swap in our previous example, between "3" and "5", is the only instance of this in our example.
We have to calculate how many of said swaps there are. This is left as an exercise to the reader, but here's one last hint: you only have to loop through the first half of the list. Though this is still, in the end, O(n^2), we only have to do the O(n^2) operations on the first half of the list, making it much faster overall.
Use divide and conquer.
divide: split the sequence of size n into two lists of size n/2
conquer: count recursively on the two lists
combine: this is the tricky part (doing it in linear time)
For the combine step, use merge-and-count. Suppose the two lists are A and B. They are already sorted. Produce an output list L from A and B while also counting the number of inversions (a, b), where a is in A, b is in B and a > b.
The idea is similar to "merge" in merge sort: merge the two sorted lists into one output list, but also count the inversions along the way.
Every time a_i is appended to the output, no new inversions are encountered, since a_i is smaller than everything left in list B. If b_j is appended to the output, then it is smaller than all the remaining items in A, so we increase the inversion count by the number of elements remaining in A.
merge-and-count(A,B)
    ; A, B: two sorted input lists
    ; C: output list
    ; i, j: current pointers into each list, starting at the beginning
    ; a_i, b_j: elements pointed to by i and j
    ; count: number of inversions, initially 0
    while A,B != empty
        append min(a_i, b_j) to C
        if b_j < a_i
            count += number of elements remaining in A
            j++
        else
            i++
    ; now one list is empty
    append the remainder of the other list to C
    return count, C
With merge-and-count, we can design the count-inversions algorithm as follows:
sort-and-count(L)
    if L has one element return 0, L
    else
        divide L into A, B
        (rA, A) = sort-and-count(A)
        (rB, B) = sort-and-count(B)
        (r, L) = merge-and-count(A,B)
    return rA+rB+r, L
T(n) = O(n lg n)
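A direct Python transcription of the pseudocode above (my code):

def merge_and_count(A, B):
    C = []
    count = i = j = 0
    while i < len(A) and j < len(B):
        if B[j] < A[i]:
            count += len(A) - i          # b_j is smaller than all remaining a_i
            C.append(B[j]); j += 1
        else:
            C.append(A[i]); i += 1
    C.extend(A[i:]); C.extend(B[j:])     # one list is empty; append the rest
    return count, C

def sort_and_count(L):
    if len(L) <= 1:
        return 0, L
    rA, A = sort_and_count(L[:len(L) // 2])
    rB, B = sort_and_count(L[len(L) // 2:])
    r, C = merge_and_count(A, B)
    return rA + rB + r, C

print(sort_and_count([5, 3, 1, 2, 4]))   # (6, [1, 2, 3, 4, 5]) -- 6 inversions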
