Why does this simple shuffle algorithm produce biased results?

It seems that this simple shuffle algorithm will produce biased results:
# suppose $arr is filled with 1 to 52
for ($i = 0; $i < 52; $i++) {
    $j = rand(0, 51);
    # swap the items
    $tmp = $arr[$j];
    $arr[$j] = $arr[$i];
    $arr[$i] = $tmp;
}
You can try it: instead of using 52, use 3 (suppose only 3 cards are used), run it 10,000 times, and tally up the results; you will see that the results are skewed towards certain patterns...
The question is... what is a simple explanation for why this happens?
The correct solution is to use something like
for ($i = 0; $i < 51; $i++) { # last card need not swap
    $j = rand($i, 51); # don't touch the cards that already "settled"
    # swap the items
    $tmp = $arr[$j];
    $arr[$j] = $arr[$i];
    $arr[$i] = $tmp;
}
But the question is... why does the first method, seemingly also totally random, make the results biased?
Update 1: thanks to folks here for pointing out that it needs to be rand($i, 51) for it to shuffle correctly.

See this:
The Danger of Naïveté (Coding Horror)
Let's look at your three card deck as an example. Using a 3 card deck, there are only 6 possible orders for the deck after a shuffle: 123, 132, 213, 231, 312, 321.
With your 1st algorithm there are 27 possible paths (outcomes) for the code, depending on the results of the rand() function at different points. Each of these outcomes is equally likely (unbiased), and each maps to a single result from the list of 6 possible "real" shuffle results above. We now have 27 items and 6 buckets to put them in. Since 27 is not evenly divisible by 6, some of those 6 combinations must be over-represented.
With the 2nd algorithm there are 6 possible outcomes that map exactly to the 6 possible "real" shuffle results, and they should all be represented equally over time.
This is important because the buckets that are over-represented in the first algorithm are not random. The buckets selected for the bias are repeatable and predictable. So if you're building an online poker game and use the 1st algorithm, a hacker could figure out you used the naive shuffle, and from that work out that certain deck arrangements are much more likely to occur than others. Then they can place bets accordingly. They'll lose some, but they'll win much more than they lose and quickly put you out of business.
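For a quick empirical check of the bucket-counting argument, here is a minimal Python sketch (my own, not part of the original answer); tallying many runs of the naive loop over 3 cards shows it settling near the 4/27-vs-5/27 split described above:

import random
from collections import Counter

def naive_shuffle(n):
    a = list(range(1, n + 1))
    for i in range(n):
        j = random.randint(0, n - 1)      # swap with any position, settled or not
        a[i], a[j] = a[j], a[i]
    return ''.join(map(str, a))

tally = Counter(naive_shuffle(3) for _ in range(270000))
for perm, count in sorted(tally.items()):
    print(perm, count)                    # three orders hover near 50,000, three near 40,000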

Here's the complete probability tree for these replacements.
Let's assume that you start with the sequence 123, and then we'll enumerate all the various ways to produce random results with the code in question.
123
+- 123 - swap 1 and 1 (these are positions,
| +- 213 - swap 2 and 1 not numbers)
| | +- 312 - swap 3 and 1
| | +- 231 - swap 3 and 2
| | +- 213 - swap 3 and 3
| +- 123 - swap 2 and 2
| | +- 321 - swap 3 and 1
| | +- 132 - swap 3 and 2
| | +- 123 - swap 3 and 3
| +- 132 - swap 2 and 3
| +- 231 - swap 3 and 1
| +- 123 - swap 3 and 2
| +- 132 - swap 3 and 3
+- 213 - swap 1 and 2
| +- 123 - swap 2 and 1
| | +- 321 - swap 3 and 1
| | +- 132 - swap 3 and 2
| | +- 123 - swap 3 and 3
| +- 213 - swap 2 and 2
| | +- 312 - swap 3 and 1
| | +- 231 - swap 3 and 2
| | +- 213 - swap 3 and 3
| +- 231 - swap 2 and 3
| +- 132 - swap 3 and 1
| +- 213 - swap 3 and 2
| +- 231 - swap 3 and 3
+- 321 - swap 1 and 3
+- 231 - swap 2 and 1
| +- 132 - swap 3 and 1
| +- 213 - swap 3 and 2
| +- 231 - swap 3 and 3
+- 321 - swap 2 and 2
| +- 123 - swap 3 and 1
| +- 312 - swap 3 and 2
| +- 321 - swap 3 and 3
+- 312 - swap 2 and 3
+- 213 - swap 3 and 1
+- 321 - swap 3 and 2
+- 312 - swap 3 and 3
Now, the fourth column of numbers, the one before the swap information, contains the final outcome, with 27 possible outcomes.
Let's count how many times each pattern occurs:
123 - 4 times
132 - 5 times
213 - 5 times
231 - 5 times
312 - 4 times
321 - 4 times
=============
27 times total
If you run the code that swaps at random for an infinite number of times, the patterns 132, 213 and 231 will occur more often than the patterns 123, 312, and 321, simply because the way the code swaps makes that more likely to occur.
Now, of course, you can say that if you run the code 30 times (27 + 3), you could end up with all the patterns occurring 5 times, but when dealing with statistics you have to look at the long term trend.
Here's C# code that enumerates every possible path of the random function and tallies the resulting patterns:
using System;
using System.Collections.Generic;

class Program
{
    static void Main(string[] args)
    {
        Dictionary<String, Int32> occurances = new Dictionary<String, Int32>
        {
            { "123", 0 },
            { "132", 0 },
            { "213", 0 },
            { "231", 0 },
            { "312", 0 },
            { "321", 0 }
        };
        Char[] digits = new[] { '1', '2', '3' };
        Func<Char[], Int32, Int32, Char[]> swap = delegate(Char[] input, Int32 pos1, Int32 pos2)
        {
            Char[] result = new Char[] { input[0], input[1], input[2] };
            Char temp = result[pos1];
            result[pos1] = result[pos2];
            result[pos2] = temp;
            return result;
        };
        for (Int32 index1 = 0; index1 < 3; index1++)
        {
            Char[] level1 = swap(digits, 0, index1);
            for (Int32 index2 = 0; index2 < 3; index2++)
            {
                Char[] level2 = swap(level1, 1, index2);
                for (Int32 index3 = 0; index3 < 3; index3++)
                {
                    Char[] level3 = swap(level2, 2, index3);
                    String output = new String(level3);
                    occurances[output]++;
                }
            }
        }
        foreach (var kvp in occurances)
        {
            Console.Out.WriteLine(kvp.Key + ": " + kvp.Value);
        }
    }
}
This outputs:
123: 4
132: 5
213: 5
231: 5
312: 4
321: 4
So while this answer does in fact count, it's not a purely mathematical answer; you just have to evaluate all the possible ways the random function can go and look at the final outputs.

From your comments on the other answers, it seems that you are looking not just for an explanation of why the distribution is not the uniform distribution (for which the divisibility answer is a simple one) but also an "intuitive" explanation of why it is actually far from uniform.
Here's one way of looking at it. Suppose you start with the initial array [1, 2, ..., n] (where n might be 3, or 52, or whatever) and apply one of the two algorithms. If all permutations are uniformly likely, then the probability that 1 remains in the first position should be 1/n. And indeed, in the second (correct) algorithm, it is 1/n, as 1 stays in its place if and only if it is not swapped the first time, i.e. iff the initial call to rand(0,n-1) returns 0.
However, in the first (wrong) algorithm, 1 remains untouched only if it is neither swapped the first time nor any other time — i.e., only if the first rand returns 0 and none of the other rands returns 0, the probability of which is (1/n) * (1-1/n)^(n-1) ≈ 1/(ne) ≈ 0.37/n, not 1/n.
And that's the "intuitive" explanation: in your first algorithm, earlier items are much more likely to be swapped out of place than later items, so the permutations you get are skewed towards patterns in which the early items are not in their original places.
(It's a bit more subtle than that, e.g. 1 can get swapped into a later position and still end up getting swapped back into place through a complicated series of swaps, but those probabilities are relatively less significant.)

The best explanation I've seen for this effect was from Jeff Atwood on his CodingHorror blog (The Danger of Naïveté).
Using this code to simulate a 3-card random shuffle...
for (int i = 0; i < cards.Length; i++)
{
    int n = rand.Next(cards.Length);
    Swap(ref cards[i], ref cards[n]);
}
...you get the distribution shown in the chart from the original post: some orders come up noticeably more often than others.
The shuffle code (above) results in 3^3 (27) possible deck combinations. But the mathematics tell us that there are really only 3! or 6 possible combinations of a 3 card deck. So some of the combinations are over-represented.
You would need to use a Fisher-Yates shuffle to properly (randomly) shuffle a deck of cards.
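For completeness, here is a small Python sketch of the Fisher-Yates loop (my own illustration, not from the blog post); tallying it over 3 cards shows all six orders appearing essentially equally often:

import random
from collections import Counter

def fisher_yates(n):
    a = list(range(1, n + 1))
    for i in range(n - 1):                # the last card need not swap
        j = random.randint(i, n - 1)      # only positions that are not yet settled
        a[i], a[j] = a[j], a[i]
    return ''.join(map(str, a))

tally = Counter(fisher_yates(3) for _ in range(60000))
print(sorted(tally.items()))              # all six orders land near 10,000 each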

Here's another intuition: the single shuffle swap can't create symmetry in the probability of occupying a position unless at least 2-way symmetry already exists. Call the three positions A, B, and C. Now let a be the probability of card 2 being in position A, b be the probability of card 2 being in position B, and c be the probability of it being in position C, prior to a swap move. Assume that no two probabilities are the same: a!=b, b!=c, c!=a. Now compute the probabilities a', b', and c' of the card being in these three positions following a swap. Let's say that this swap move consists of position C being swapped with one of the three positions at random. Then:
a' = a*2/3 + c*1/3
b' = b*2/3 + c*1/3
c' = 1/3.
That is, the probability that the card winds up in position A is the probability it was already there times the 2/3 of the time position A isn't involved in the swap, plus the probability that it was in position C times the 1/3 probability that C swapped with A, etc. Now subtracting the first two equations, we get:
a' - b' = (a - b)*2/3
which means that because we assumed a!=b, then a'!=b' (though the difference will approach 0 over time, given enough swaps). But since a'+b'+c'=1, if a'!=b', then neither can be equal to c' either, which is 1/3. So if the three probabilities start off all different before a swap, they will also all be different after a swap. And this would hold no matter which position was swapped - we just interchange the roles of the variables in the above.
Now the very first swap started by swapping card 1 in position A with one of the three positions. In this case, there was two-way symmetry before the swap, because the probability of card 1 in position B = probability of card 1 in position C = 0. So in fact, card 1 can wind up with symmetric probabilities and it does end up in each of the three positions with equal probability. This remains true for all subsequent swaps. But card 2 winds up in the three positions after the first swap with probability (1/3, 2/3, 0), and likewise card 3 winds up in the three positions with probability (1/3, 0, 2/3). So no matter how many subsequent swaps we do, we will never wind up with card 2 or 3 having exactly the same probability of occupying all three positions.
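To see this numerically, here is a small sketch (mine, not part of the answer) that pushes each card's exact position distribution through the three swaps of the naive algorithm, using the transition rule above with the pivot position moving through A, B, C as the loop runs:

from fractions import Fraction

def step(dist, pivot, n=3):
    # One naive-shuffle iteration: position `pivot` is swapped with a uniformly
    # random position; dist[p] is the probability the card sits at position p.
    third = Fraction(1, n)
    new = []
    for p in range(n):
        if p == pivot:
            new.append(sum(dist) * third)                    # the card lands on the pivot
        else:
            new.append(dist[p] * (1 - third) + dist[pivot] * third)
    return new

for card in range(3):
    dist = [Fraction(int(p == card)) for p in range(3)]      # each card starts in its own slot
    for pivot in range(3):
        dist = step(dist, pivot)
    print(card + 1, [str(x) for x in dist])

Card 1 comes out (1/3, 1/3, 1/3), but card 2 comes out (10/27, 8/27, 9/27) and card 3 comes out (8/27, 10/27, 9/27): the symmetry card 1 acquired on the first swap is never reached by the other two cards.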

See the Coding Horror post The Danger of Naïveté.
Basically (supposing 3 cards):

The naive shuffle results in 3^3 (27) possible deck combinations. That's odd, because the mathematics tell us that there are really only 3! or 6 possible combinations of a 3 card deck. In the KFY shuffle, we start with an initial order, swap from the third position with any of the three cards, then swap again from the second position with the remaining two cards.

The simple answer is that there are 52^52 possible ways for this algorithm to run, but there are only 52! possible arrangements of 52 cards. For the algorithm to be fair, it needs to produce each of these arrangements with equal probability. 52^52 is not an integer multiple of 52!. Therefore, some arrangements must be more likely than others.
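A one-line arithmetic check of that divisibility claim (my own illustration):

from math import factorial

print(52**52 % factorial(52))                    # non-zero, so 52^52 outcomes cannot map evenly onto 52! orders
print(3**3, factorial(3), 3**3 % factorial(3))   # 27, 6, 3 -- the three-card case from the other answers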

an illustrative approach might be this:
1) consider only 3 cards.
2) for the algorithm to give evenly distributed results, the chance of "1" ending up as a[0] must be 1/3, and the chance of "2" ending up in a[1] must be 1/3 too, and so forth.
3) so if we look at the second algorithm:
probability that "1" ends up at a[0]:
when 0 is the random number generated, so 1 case out of (0,1,2); therefore, it is 1 out of 3 = 1/3
probability that "2" ends up at a[1]:
when it didn't get swapped to a[0] the first time, and it didn't get swapped to a[2] the second time: 2/3 * 1/2 = 1/3
probability that "3" ends up at a[2]:
when it didn't get swapped to a[0] the first time, and it didn't get swapped to a[1] the second time: 2/3 * 1/2 = 1/3
they are all perfectly 1/3, and we don't see any error here.
4) if we try to calculate the probability of "1" ending up as a[0] in the first algorithm, the calculation will be a bit long, but as the illustration in lassevk's answer shows, it is 9/27 = 1/3, while "2" ending up as a[1] has a chance of 8/27, and "3" ending up as a[2] has a chance of 9/27 = 1/3 (a small enumeration sketch follows this list).
as a result, "2" ending up as a[1] is not 1/3, and therefore the algorithm will produce a pretty skewed result (about 3.7% error, as opposed to a negligible deviation such as 3/10000000000000 = 0.00000000003%)
5) the proof that Joel Coehoorn has actually can prove that some cases will be over-represented. I think the explanation for why it is n^n is this: at each iteration, there are n possibilities for the random number, so after n iterations there can be n^n cases = 27. This number doesn't divide the number of permutations (n! = 3! = 6) evenly in the case of n = 3, so some results are over-represented. They are over-represented in a way that instead of showing up 4 times, they show up 5 times, so if you shuffle the cards millions of times from the initial order of 1 to 52, the over-represented cases will show up 5 million times as opposed to 4 million times, which is quite a big difference.
6) I think the over-representation is shown, but "why" does the over-representation happen?
7) an ultimate test for the algorithm to be correct is that any number has a 1/n probability to end up at any slot.
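Here is the sketch referenced in point 4 (my own, not part of the answer): it enumerates all 27 equally likely paths of the first algorithm and tallies, for each card, how often it lands in each slot.

from itertools import product
from fractions import Fraction

n = 3
hits = [[0] * n for _ in range(n)]            # hits[card][slot]
for js in product(range(n), repeat=n):        # one rand() outcome per iteration
    a = list(range(n))
    for i, j in enumerate(js):
        a[i], a[j] = a[j], a[i]
    for slot, card in enumerate(a):
        hits[card][slot] += 1

for card in range(n):
    print(card + 1, [str(Fraction(h, n**n)) for h in hits[card]])

Card "1" lands in a[0] with probability 9/27 = 1/3, but card "2" lands in a[1] only 8/27 of the time, matching the numbers above.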

The Naive algorithm picks the values of n like so:
n = rand(3)
n = rand(3)
n = rand(3)
3^3 possible combinations of n
1,1,1 1,1,2 ... 3,3,2 3,3,3 (27 combinations). lassevk's answer shows the distribution of these combinations among the cards.
the better algorithm does:
n = rand(3)
n = rand(2)
n! possible combinations of n
1,1, 1,2, 2,1 2,2 3,1 3,2 (6 combinations, all of them giving a different result)
As mentioned in the other answers, if you take 27 attempts to get 6 results, you cannot possibly attain the 6 results with even distribution, since 27 is not divisible by 6. Put 27 marbles into 6 buckets and no matter what you do, some buckets will have more marbles than others, the best you can do is 4,4,4,5,5,5 marbles for buckets 1 through 6.
the fundamental problem with the naive shuffle is that it swaps too many times. To shuffle 3 cards completely, you need only do 2 swaps, and the second swap need only be among the first two cards, since the 3rd card already had a 1/3 chance of being swapped. Continuing to swap cards imparts more chances that a given card will be swapped, and these chances only even out to 1/3, 1/3, 1/3 if the total number of swap combinations is divisible by 6.

Not that another answer is needed, but I found it worthwhile to try to work out exactly why Fisher-Yates is uniform.
If we are talking about a deck with N items, then this question is: how can we show that
Pr(Item i ends up in slot j) = 1/N?
Breaking it down with conditional probabilities, Pr(item i ends up at slot j) is equal to
Pr(item i ends up at slot j | item i was not chosen in the first j-1 draws)
* Pr(item i was not chosen in the first j-1 draws).
and from there it expands recursively back to the first draw.
Now, the probability that element i is not drawn on the first draw is (N-1)/N. The probability that it is not drawn on the second draw, conditional on not having been drawn on the first, is (N-2)/(N-1), and so on.
So the probability that element i is not drawn in the first j-1 draws is:
(N-1)/N * (N-2)/(N-1) * ... * (N-j+1)/(N-j+2)
and the probability that it is drawn at round j, conditional on not having been drawn earlier, is just 1/(N-j+1), since N-j+1 items remain at that point.
Notice that the product telescopes: each numerator cancels the preceding denominator, leaving just (N-j+1)/N.
So the overall probability of element i appearing in slot j is:
[(N-1)/N * (N-2)/(N-1) * ... * (N-j+1)/(N-j+2)] * (1/(N-j+1))
= 1/N
as expected.
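A quick numeric check of the telescoping product (my own illustration, not part of the answer), for N = 52 and a few choices of j:

from fractions import Fraction

def prob_slot(N, j):
    # Pr(a fixed item ends in slot j) under Fisher-Yates, with j counted from 1.
    p = Fraction(1)
    for d in range(j - 1):                 # not chosen in each of the first j-1 draws
        p *= Fraction(N - 1 - d, N - d)
    return p * Fraction(1, N - j + 1)      # then chosen at draw j

for j in (1, 2, 17, 52):
    print(j, prob_slot(52, j))             # 1/52 every time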
To get more general about the "simple shuffle", the particular property that it is lacking is called exchangeability. Because of the "path dependence" of the way the shuffle is created (i.e. which of the 27 paths is followed to create the output), you are not able to treat the different component-wise random variables as though they can appear in any order. In fact, this is perhaps the motivating example for why exchangeability matters in random sampling.

The clearest way to show that the first algorithm fails is to view it as a Markov chain of n steps on the graph whose n! vertices are all the permutations of n natural numbers. The algorithm hops from one vertex to another with a transition probability of 1/n for each hop, so there are n^n paths, each with probability 1/n^n. Suppose the final probability of landing on each vertex were 1/n!, which is a reduced fraction. To achieve that, there would have to be m paths with the same final vertex such that m/n^n = 1/n!, i.e. n^n = m·n! for some natural number m, so n! would have to divide n^n. But that is impossible for n > 2: it would require n-1 to divide n^n, and since n-1 and n are coprime, n-1 would have to be 1. We have a contradiction.


Fast calculation of probability distribution in board game Da Vinci Code

I'm interested in efficiently calculating the probability distribution over possible secret numbers given what one can observe of the opponents' hand (and your own hand) in the board game Da Vinci Code. A link to the game here: https://boardgamegeek.com/boardgame/8946/da-vinci-code
I have abstracted the problem into the following:
You are given an array A of length N and a finite set of numbers Si for each index i of the array. Now,
we are to place a number from Si at each index i to fill the entire array A;
while ensuring that the number is unique across the entire array A;
and for 3 disjoint subarrays A1, A2, A3 of A such that concat(A1, A2, A3) = A, the numbers in each subarray must follow a strictly increasing order;
given all the possible ways to fill A that satisfy the above constraints, what is the probability distribution over each number at each index?
Here I provide an example below:
Assuming we have the following array of length 5 with each column representing Si at the index of the column
| 6  6 | 6  6 | 6 |
|    5 |    5 |   |
| 4  4 |      | 4 |
|      | 3  3 |   |
| 2    | 2  2 |   |
| 1  1 |      |   |
| ____ | ____ | _ |
|  A1  |  A2  | A3|
The set of all possible arrays are:
14236
14256
14356
15234
15236
15264
15364
16234
16254
16354
24356
25364
26354
45236
Therefore the probability distribution over each number [1-6] at each index is:
6 0 4/14 0 3/14 6/14
5 0 6/14 0 6/14 0
4 1/14 4/14 0 0 8/14
3 0 0 6/14 5/14 0
2 3/14 0 8/14 0 0
1 10/14 0 0 0 0
___________ __________ ______
A1 A2 A3
Brute forcing this problem is obviously doable but I have a gut feeling that there must be some more efficient algorithms for this.
The reason why I think so is that one can derive the probability distribution from the set of all possibilities but not the other way around, so the distribution itself must contain less information than the set of all possibilities does. Therefore, I believe that we do not need to generate all possibilities just to obtain the probability distribution.
Hence, I am wondering if there is any smart matrix operation we could use for this problem or even fixed-point iteration/density evolution to approximate the end probability distribution? Some other potentially more efficient approaches to this problem are also appreciated.
Edit: By brute-force, I mean specifically enumerating all possibilities with constraint propagation like in sudoku. My hope is to obtain an accurate solution, or an approximate solution that approximates well (better than plain Monte Carlo), that works better than CP in terms of running time.
Edit2: The better solution I desire should have the characteristic that it does not need to generate all possibilities to obtain or approximate the probability distribution.
Did you consider Constraint Propagation?
When you assign a number to a position, that number cannot appear in any other position, so exclude that number from the remaining positions
When you assign a number in the first column of a subarray, the second column must contain a larger value, so exclude all values that are lower or equal
With a BF approach in your example the code would generate and check 4 * 4 * 3 * 4 * 2 = 384 possibilities; with the CP approach we only generate 65 possibilities.
Here is a sample Python implementation:
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DaVinci:
    grid : List[List[int]]
    top : int
    lastcol : int = 0
    solved : List = field(default_factory=list)
    count : int = 0
    distrib : List[Dict[int,int]] = field(init=False)

    def __post_init__(self):
        self.lastcol = len(self.grid) - 1
        self.distrib = [{x: 0 for x in range(1, self.top + 1)} for y in range(len(self.grid))]
        self.solve_next(current=0, even=True, blocked=[], minval=0, solving=[])
        self.count = len(self.solved)

    def solve_next(self, current, even, blocked, minval, solving):
        for n in self.grid[current]:
            if n not in blocked and n > minval:
                if current != self.lastcol:
                    self.solve_next(current + 1, not even, blocked + [n], n * even, solving + [n])
                else:
                    for col in range(self.lastcol):
                        self.distrib[col][solving[col]] += 1
                    self.distrib[self.lastcol][n] += 1
                    self.solved.append(solving + [n])

    def show_solved(self):
        for sol in self.solved:
            print(''.join(map(str, sol)))

    def show_distrib(self):
        for i in range(1, self.top + 1):
            print(i, end=' ')
            for col in range(len(self.grid)):
                print(f'{self.distrib[col][i]:2d}/{self.count}', end=' ')
            print()
dv = DaVinci([[1,2,4,6],[1,4,5,6],[2,3,6],[2,3,5,6],[4,6]], 6)
dv.show_solved()
14236
14256
14356
15234
15236
15264
15364
16234
16254
16354
24356
25364
26354
45236
dv.show_distrib()
1 10/14 0/14 0/14 0/14 0/14
2 3/14 0/14 8/14 0/14 0/14
3 0/14 0/14 6/14 5/14 0/14
4 1/14 4/14 0/14 0/14 8/14
5 0/14 6/14 0/14 6/14 0/14
6 0/14 4/14 0/14 3/14 6/14
A simple idea to get an approximation for the distribution is to use a Monte Carlo approach.
Set a variable total := 0 and a matrix M[N][Q] with all entries initially set to zero (Q is the total number of allowed values).
Fix a positive integer K. Perform K iterations. At each iteration, for each i in [1..N], take a random element from Si and fill the array A. When the array A is all filled, verify in O(N) if it satisfies your conditions. If so, increment by one the variable total and iterate through the array, incrementing the matrix entries M[i][A[i]] by one, for i in [1..N].
In the end, iterate through all the elements of the matrix M in O(N Q) and divide its elements by total to get an approximation for the distribution.
Total time complexity is O(N (K + Q)).
You can also precalculate stuff to make the approximation more precise. For example, you can precalculate all increasing sequences in the groups A1, A2 and A3. Put them in arrays I1, I2, I3. Then, at each iteration, instead of taking random elements from each Si, you take random sequences from I1, I2 and I3 and verify if the concatenation has no repeated elements (in O(N)). If so, proceed as before. The total time complexity (apart from the expensive precalculation) remains O(N (K + Q)).
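Here is a minimal Python sketch of the plain Monte Carlo variant just described (the structure and names are my own, and it assumes the values 1..6 from the example above): sample one candidate per Si, keep only the samples that satisfy the constraints, and normalise the counts.

import random

def monte_carlo_distribution(S, subarrays, K=100000):
    # S: allowed values per index; subarrays: index ranges that must be strictly increasing.
    N = len(S)
    counts = [dict.fromkeys(range(1, 7), 0) for _ in range(N)]
    total = 0
    for _ in range(K):
        A = [random.choice(s) for s in S]
        if len(set(A)) != N:                                   # values must be unique
            continue
        if any(any(A[i] >= A[i + 1] for i in rng[:-1]) for rng in subarrays):
            continue                                           # each subarray strictly increasing
        total += 1
        for i, v in enumerate(A):
            counts[i][v] += 1
    return counts, total

S = [[1, 2, 4, 6], [1, 4, 5, 6], [2, 3, 6], [2, 3, 5, 6], [4, 6]]
counts, total = monte_carlo_distribution(S, [range(0, 2), range(2, 4), range(4, 5)])
print(total, counts[0])                                        # counts[i][v] / total approximates the distribution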
Start by converting all legal subarray selections into bitvectors.
E.g., for A2 we have [2,3], [2,5], [2,6], [3,5], [3,6]
[2,3] as a bitvector is 000110
[3,5] is 010100
Next, arrange your three subarrays by the number of bitvectors they have.
Next, put these in a hash for each subarray/member combination except the smallest subarray. Use the smallest set bit as the key.
E.g. For [2,3] in A2, we'd have {2 => 000110}
Note that the values of the map need to be in an array, since there will be multiple bitvectors for each index/element combo.
Finally,
For every bitvec of subarray_small:
    For every non-set bit of that bitvec:
        Find the list that has that bit as a key in subarray_medium
        For every bitvec in this list:
            Check if the inverse of (bitvec_small | bitvec_medium) is in the hash for subarray_large.
            If it is, we have a valid arrangement; update your frequency counts.
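A simplified Python sketch of the bitmask idea (my own; it skips the hash-by-lowest-set-bit optimisation and, because the array may not use every value, tests disjointness instead of taking an exact complement):

from itertools import product

def increasing_selections(columns):
    # All ways to pick one value per column so the picks strictly increase,
    # each returned together with a bitmask of the values it uses.
    out = []
    for combo in product(*columns):
        if all(a < b for a, b in zip(combo, combo[1:])):
            mask = 0
            for v in combo:
                mask |= 1 << v
            out.append((combo, mask))
    return out

A1 = increasing_selections([[1, 2, 4, 6], [1, 4, 5, 6]])
A2 = increasing_selections([[2, 3, 6], [2, 3, 5, 6]])
A3 = increasing_selections([[4, 6]])

count = 0
for c1, m1 in A1:
    for c2, m2 in A2:
        if m1 & m2:
            continue                      # a value is reused: invalid
        for c3, m3 in A3:
            if (m1 | m2) & m3:
                continue
            count += 1                    # c1 + c2 + c3 is one valid arrangement
print(count)                              # 14 for the example grid above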

Array size in Cycle leader iteration Algorithm [duplicate]

The cycle leader iteration algorithm is an algorithm for shuffling an array by moving all even-numbered entries to the front and all odd-numbered entries to the back while preserving their relative order. For example, given this input:
a 1 b 2 c 3 d 4 e 5
the output would be
a b c d e 1 2 3 4 5
This algorithm runs in O(n) time and uses only O(1) space.
One unusual detail of the algorithm is that it works by splitting the array up into blocks of size 3^k + 1. Apparently this is critical for the algorithm to work correctly, but I have no idea why this is.
Why is the choice of 3^k + 1 necessary in the algorithm?
Thanks!
This is going to be a long answer. The answer to your question isn't simple and requires some number theory to fully answer. I've spent about half a day working through the algorithm and I now have a good answer, but I'm not sure I can describe it succinctly.
The short version:
Breaking the input into blocks of size 3^k + 1 essentially breaks the input apart into blocks of size 3^k - 1 surrounded by two elements that do not end up moving.
The remaining 3^k - 1 elements in the block move according to an interesting pattern: each element moves to the position given by dividing the index by two modulo 3^k.
This particular motion pattern is connected to a concept from number theory and group theory called primitive roots.
Because the number two is a primitive root modulo 3^k, beginning with the numbers 1, 3, 9, 27, etc. and running the pattern is guaranteed to cycle through all the elements of the array exactly once and put them into the proper place.
This pattern is highly dependent on the fact that 2 is a primitive root of 3^k for any k ≥ 1. Changing the size of the array to another value will almost certainly break this because the wrong property is preserved.
The Long Version
To present this answer, I'm going to proceed in steps. First, I'm going to introduce cycle decompositions as a motivation for an algorithm that will efficiently shuffle the elements around in the right order, subject to an important caveat. Next, I'm going to point out an interesting property of how the elements happen to move around in the array when you apply this permutation. Then, I'll connect this to a number-theoretic concept called primitive roots to explain the challenges involved in implementing this algorithm correctly. Finally, I'll explain why this leads to the choice of 3k + 1 as the block size.
Cycle Decompositions
Let's suppose that you have an array A and a permutation of the elements of that array. Following the standard mathematical notation, we'll denote the permutation of that array as σ(A). We can line the initial array A up on top of the permuted array σ(A) to get a sense for where every element ended up. For example, here's an array and one of its permutations:
A 0 1 2 3 4
σ(A) 2 3 0 4 1
One way that we can describe a permutation is just to list off the new elements inside that permutation. However, from an algorithmic perspective, it's often more helpful to represent the permutation as a cycle decomposition, a way of writing out a permutation by showing how to form that permutation by beginning with the initial array and then cyclically permuting some of its elements.
Take a look at the above permutation. First, look at where the 0 ended up. In σ(A), the element 0 ended up taking the place of where the element 2 used to be. In turn, the element 2 ended up taking the place of where the element 0 used to be. We denote this by writing (0 2), indicating that 0 should go where 2 used to be, and 2 should go where 0 used to be.
Now, look at the element 1. The element 1 ended up where 4 used to be. The number 4 then ended up where 3 used to be, and the element 3 ended up where 1 used to be. We denote this by writing (1 4 3), that 1 should go where 4 used to be, that 4 should go where 3 used to be, and that 3 should go where 1 used to be.
Combining these together, we can represent the overall permutation of the above elements as (0 2)(1 4 3) - we should swap 0 and 2, then cyclically permute 1, 4, and 3. If we do that starting with the initial array, we'll end up at the permuted array that we want.
Cycle decompositions are extremely useful for permuting arrays in place because it's possible to permute any individual cycle in O(C) time and O(1) auxiliary space, where C is the number of elements in the cycle. For example, suppose that you have a cycle (1 6 8 4 2). You can permute the elements in the cycle with code like this:
int[] cycle = {1, 6, 8, 4, 2};
int temp = array[cycle[0]];
for (int i = 1; i < cycle.length; i++) {
    int displaced = array[cycle[i]];   // remember what is here...
    array[cycle[i]] = temp;            // ...drop the carried value into place...
    temp = displaced;                  // ...and carry the displaced value forward
}
array[cycle[0]] = temp;
This works by just swapping everything around until everything comes to rest. Aside from the space usage required to store the cycle itself, it only needs O(1) auxiliary storage space.
In general, if you want to design an algorithm that applies a particular permutation to an array of elements, you can usually do so by using cycle decompositions. The general algorithm is the following:
for (each cycle in the cycle decomposition) {
apply the above algorithm to cycle those elements;
}
The overall time and space complexity for this algorithm depends on the following:
How quickly can we determine the cycle decomposition we want?
How efficiently can we store that cycle decomposition in memory?
To get an O(n)-time, O(1)-space algorithm for the problem at hand, we're going to show that there's a way to determine the cycle decomposition in O(1) time and space. Since everything will get moved exactly once, the overall runtime will be O(n) and the overall space complexity will be O(1). It's not easy to get there, as you'll see, but then again, it's not awful either.
The Permutation Structure
The overarching goal of this problem is to take an array of 2n elements and shuffle it so that even-positioned elements end up at the front of the array and odd-positioned elements end up at the end of the array. Let's suppose for now that we have 14 elements, like this:
0 1 2 3 4 5 6 7 8 9 10 11 12 13
We want to shuffle the elements so that they come out like this:
0 2 4 6 8 10 12 1 3 5 7 9 11 13
There are a couple of useful observations we can have about the way that this permutation arises. First, notice that the first element does not move in this permutation, because even-indexed elements are supposed to show up in the front of the array and it's the first even-indexed element. Next, notice that the last element does not move in this permutation, because odd-indexed elements are supposed to end up at the back of the array and it's the last odd-indexed element.
These two observations, put together, means that if we want to permute the elements of the array in the desired fashion, we actually only need to permute the subarray consisting of the overall array with the first and last elements dropped off. Therefore, going forward, we are purely going to focus on the problem of permuting the middle elements. If we can solve that problem, then we've solved the overall problem.
Now, let's look at just the middle elements of the array. From our above example, that means that we're going to start with an array like this one:
Element 1 2 3 4 5 6 7 8 9 10 11 12
Index 1 2 3 4 5 6 7 8 9 10 11 12
We want to get the array to look like this:
Element 2 4 6 8 10 12 1 3 5 7 9 11
Index 1 2 3 4 5 6 7 8 9 10 11 12
Because this array was formed by taking a 0-indexed array and chopping off the very first and very last element, we can treat this as a one-indexed array. That's going to be critically important going forward, so be sure to keep that in mind.
So how exactly can we go about generating this permutation? Well, for starters, it doesn't hurt to take a look at each element and to try to figure out where it began and where it ended up. If we do so, we can write things out like this:
The element at position 1 ended up at position 7.
The element at position 2 ended up at position 1.
The element at position 3 ended up at position 8.
The element at position 4 ended up at position 2.
The element at position 5 ended up at position 9.
The element at position 6 ended up at position 3.
The element at position 7 ended up at position 10.
The element at position 8 ended up at position 4.
The element at position 9 ended up at position 11.
The element at position 10 ended up at position 5.
The element at position 11 ended up at position 12.
The element at position 12 ended up at position 6.
If you look at this list, you can spot a few patterns. First, notice that the final index of all the even-numbered elements is always half the position of that element. For example, the element at position 4 ended up at position 2, the element at position 12 ended up at position 6, etc. This makes sense - we pushed all the even elements to the front of the array, so half of the elements that came before them will have been displaced and moved out of the way.
Now, what about the odd-numbered elements? Well, there are 12 total elements. Each odd-numbered element gets pushed to the second half, so an odd-numbered element at position 2k+1 will get pushed to at least position 7. Its position within the second half is given by the value of k. Therefore, the element at an odd position 2k+1 gets mapped to position 7 + k.
We can take a minute to generalize this idea. Suppose that the array we're permuting has length 2n. An element at position 2x will be mapped to position x (again, even numbers get halved), and an element at position 2x+1 will be mapped to position n + 1 + x. Restating this:
The final position of an element at position p is determined as follows:
If p = 2x for some integer x, then 2x ↦ x
If p = 2x+1 for some integer x, then 2x+1 ↦ n + 1 + x
And now we're going to do something that's entirely crazy and unexpected. Right now, we have a piecewise rule for determining where each element ends up: we either divide by two, or we do something weird involving n + 1. However, from a number-theoretic perspective, there is a single, unified rule explaining where all elements are supposed to end up.
The insight we need is that in both cases, it seems like, in some way, we're dividing the index by two. For the even case, the new index really is formed by just dividing by two. For the odd case, the new index kinda looks like it's formed by dividing by two (notice that 2x+1 went to x + (n + 1)), but there's an extra term in there. In a number-theoretic sense, though, both of these really correspond to division by two. Here's why.
Rather than taking the source index and dividing by two to get the destination index, what if we take the destination index and multiply by two? If we do that, an interesting pattern emerges.
Suppose our original number was 2x. The destination is then x, and if we double the destination index to get back 2x, we end up with the source index.
Now suppose that our original number was 2x+1. The destination is then n + 1 + x. Now, what happens if we double the destination index? If we do that, we get back 2n + 2 + 2x. If we rearrange this, we can alternatively rewrite this as (2x+1) + (2n+1). In other words, we've gotten back the original index, plus an extra (2n+1) term.
Now for the kicker: what if all of our arithmetic is done modulo 2n + 1? In that case, if our original number was 2x + 1, then twice the destination index is (2x+1) + (2n+1) = 2x + 1 (modulo 2n+1). In other words, the destination index really is half of the source index, just done modulo 2n+1!
This leads us to a very, very interesting insight: the ultimate destination of each of the elements in a 2n-element array is given by dividing that number by two, modulo 2n+1. This means that there really is a nice, unified rule for determining where everything goes. We just need to be able to divide by two modulo 2n+1. It just happens to work out that in the even case, this is normal integer division, and in the odd case, it works out to taking the form n + 1 + x.
Consequently, we can reframe our problem in the following way: given a 1-indexed array of 2n elements, how do we permute the elements so that each element that was originally at index x ends up at position x/2 mod (2n+1)?
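To make the unified rule concrete, here is a tiny check (my own illustration) that dividing by two modulo 2n+1 reproduces the piecewise rule for the 12-element example:

n = 6                          # 2n = 12 middle elements, arithmetic modulo 2n+1 = 13
m = 2 * n + 1
inv2 = (m + 1) // 2            # multiplicative inverse of 2 modulo the odd number m

for p in range(1, 2 * n + 1):
    unified = (p * inv2) % m                          # "divide by two mod 2n+1"
    piecewise = p // 2 if p % 2 == 0 else n + 1 + p // 2
    assert unified == piecewise
    print(p, "->", unified)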
Cycle Decompositions Revisited
At this point, we've made quite a lot of progress. Given any element, we know where that element should end up. If we can figure out a nice way to get a cycle decomposition of the overall permutation, we're done.
This is, unfortunately, where things get complicated. Suppose, for example, that our array has 10 elements. In that case, we want to transform the array like this:
Initial: 1 2 3 4 5 6 7 8 9 10
Final: 2 4 6 8 10 1 3 5 7 9
The cycle decomposition of this permutation is (1 6 3 7 9 10 5 8 4 2). If our array has 12 elements, we want to transform it like this:
Initial: 1 2 3 4 5 6 7 8 9 10 11 12
Final: 2 4 6 8 10 12 1 3 5 7 9 11
This has cycle decomposition (1 7 10 5 9 11 12 6 3 8 4 2). If our array has 14 elements, we want to transform it like this:
Initial: 1 2 3 4 5 6 7 8 9 10 11 12 13 14
Final: 2 4 6 8 10 12 14 1 3 5 7 9 11 13
This has cycle decomposition (1 8 4 2)(3 9 12 6)(5 10)(7 11 13 14). If our array has 16 elements, we want to transform it like this:
Initial: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Final: 2 4 6 8 10 12 14 16 1 3 5 7 9 11 13 15
This has cycle decomposition (1 9 13 15 16 8 4 2)(3 10 5 11 14 7 12 6).
The problem here is that these cycles don't seem to follow any predictable patterns. This is a real problem if we're going to try to solve this problem in O(1) space and O(n) time. Even though given any individual element we can figure out what cycle contains it and we can efficiently shuffle that cycle, it's not clear how we figure out what elements belong to what cycles, how many different cycles there are, etc.
Primitive Roots
This is where number theory comes in. Remember that each element's new position is formed by dividing that number by two, modulo 2n+1. Thinking about this backwards, we can figure out which number will take the place of each number by multiplying by two modulo 2n+1. Therefore, we can think of this problem by finding the cycle decomposition in reverse: we pick a number, keep multiplying it by two and modding by 2n+1, and repeat until we're done with the cycle.
This gives rise to a well-studied problem. Suppose that we start with the number k and think about the sequence k, 2k, 2^2 k, 2^3 k, 2^4 k, etc., all done modulo 2n+1. Doing this gives different patterns depending on what odd number 2n+1 you're modding by. This explains why the above cycle patterns seem somewhat arbitrary.
I have no idea how anyone figured this out, but it turns out that there's a beautiful result from number theory that talks about what happens if you take this pattern mod 3^k for some number k:
Theorem: Consider the sequence 3^s, 3^s · 2, 3^s · 2^2, 3^s · 2^3, 3^s · 2^4, etc., all modulo 3^k, for some k ≥ s. This sequence cycles through every number between 1 and 3^k, inclusive, that is divisible by 3^s but not divisible by 3^(s+1).
We can try this out on a few examples. Let's work modulo 27 = 3^2. The theorem says that if we look at 3, 3 · 2, 3 · 4, etc., all modulo 27, then we should see all the numbers less than 27 that are divisible by 3 and not divisible by 9. Well, let's see what we get:
3 · 2^0 = 3 · 1 = 3 = 3 mod 27
3 · 2^1 = 3 · 2 = 6 = 6 mod 27
3 · 2^2 = 3 · 4 = 12 = 12 mod 27
3 · 2^3 = 3 · 8 = 24 = 24 mod 27
3 · 2^4 = 3 · 16 = 48 = 21 mod 27
3 · 2^5 = 3 · 32 = 96 = 15 mod 27
3 · 2^6 = 3 · 64 = 192 = 3 mod 27
We ended up seeing 3, 6, 12, 15, 21, and 24 (though not in that order), which are indeed all the numbers less than 27 that are divisible by 3 but not divisible by 9.
We can also try this working mod 27 and considering 1, 2, 2^2, 2^3, 2^4, ... mod 27, and we should see all the numbers less than 27 that are divisible by 1 and not divisible by 3. In other words, this should give back all the numbers less than 27 that aren't divisible by 3. Let's see if that's true:
2^0 = 1 = 1 mod 27
2^1 = 2 = 2 mod 27
2^2 = 4 = 4 mod 27
2^3 = 8 = 8 mod 27
2^4 = 16 = 16 mod 27
2^5 = 32 = 5 mod 27
2^6 = 64 = 10 mod 27
2^7 = 128 = 20 mod 27
2^8 = 256 = 13 mod 27
2^9 = 512 = 26 mod 27
2^10 = 1024 = 25 mod 27
2^11 = 2048 = 23 mod 27
2^12 = 4096 = 19 mod 27
2^13 = 8192 = 11 mod 27
2^14 = 16384 = 22 mod 27
2^15 = 32768 = 17 mod 27
2^16 = 65536 = 7 mod 27
2^17 = 131072 = 14 mod 27
2^18 = 262144 = 1 mod 27
Sorting these, we got back the numbers 1, 2, 4, 5, 7, 8, 10, 11, 13, 14, 16, 17, 19, 20, 22, 23, 25, 26 (though not in that order). These are exactly the numbers between 1 and 26 that aren't multiples of three!
This theorem is crucial to the algorithm for the following reason: if 2n+1 = 3^k for some number k, then if we process the cycle containing 1, it will properly shuffle all numbers that aren't multiples of three. If we then start the cycle at 3, it will properly shuffle all numbers that are divisible by 3 but not by 9. If we then start the cycle at 9, it will properly shuffle all numbers that are divisible by 9 but not by 27. More generally, if we use the cycle shuffle algorithm on the numbers 1, 3, 9, 27, 81, etc., then we will properly reposition all the elements in the array exactly once and will not have to worry that we missed anything.
So how does this connect to 3^k + 1? Well, we need to have that 2n + 1 = 3^k, so we need to have that 2n = 3^k - 1. But remember - we dropped the very first and very last element of the array when we did this! Adding those back in tells us that we need blocks of size 3^k + 1 for this procedure to work correctly. If the blocks are this size, then we know for certain that the cycle decomposition will consist of a cycle containing 1, a nonoverlapping cycle containing 3, a nonoverlapping cycle containing 9, etc., and that these cycles will contain all the elements of the array. Consequently, we can just start cycling 1, 3, 9, 27, etc. and be absolutely guaranteed that everything gets shuffled around correctly. That's amazing!
And why is this theorem true? It turns out that a number k for which 1, k, k^2, k^3, etc. mod p^n cycles through all the numbers that aren't multiples of p (assuming p is prime) is called a primitive root of the number p^n. There's a theorem that says that 2 is a primitive root of 3^k for all numbers k, which is why this trick works. If I have time, I'd like to come back and edit this answer to include a proof of this result, though unfortunately my number theory isn't at a level where I know how to do this.
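Putting the pieces together, here is a compact Python sketch (my own, following the description above) of the per-block step: in a block of size 3^k + 1 the first and last elements stay put, and the middle 3^k - 1 elements are rotated along the cycles led by 1, 3, 9, ...:

def interleave_block(a, lo, m):
    # In-place interleave of the block a[lo : lo + m + 1], where m = 3**k.
    # The middle element at 1-indexed position x (absolute index lo + x)
    # must move to position x/2 mod m, i.e. x * inv2 mod m.
    inv2 = (m + 1) // 2                  # inverse of 2 modulo the odd number m
    leader = 1
    while leader < m:                    # cycle leaders: 1, 3, 9, ...
        pos, val = leader, a[lo + leader]
        while True:
            nxt = (pos * inv2) % m       # where the value in hand belongs
            a[lo + nxt], val = val, a[lo + nxt]
            pos = nxt
            if pos == leader:
                break
        leader *= 3

block = ['a', 1, 'b', 2, 'c', 3, 'd', 4, 'e', 5]     # size 3**2 + 1 = 10
interleave_block(block, 0, 9)
print(block)                             # ['a', 'b', 'c', 'd', 'e', 1, 2, 3, 4, 5]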
Summary
This problem was tons of fun to work on. It involves cute tricks with dividing by two modulo an odd numbers, cycle decompositions, primitive roots, and powers of three. I'm indebted to this arXiv paper which described a similar (though quite different) algorithm and gave me a sense for the key trick behind the technique, which then let me work out the details for the algorithm you described.
Hope this helps!
Here is most of the mathematical argument missing from templatetypedef’s
answer. (The rest is comparatively boring.)
Lemma: for all integers k >= 1, we have
2^(2*3^(k-1)) = 1 + 3^k mod 3^(k+1).
Proof: by induction on k.
Base case (k = 1): we have 2^(2*3^(1-1)) = 4 = 1 + 3^1 mod 3^(1+1).
Inductive case (k >= 2): if 2^(2*3^(k-2)) = 1 + 3^(k-1) mod 3^k,
then let q = (2^(2*3^(k-2)) - (1 + 3^(k-1)))/3^k, which is an integer by the inductive hypothesis. Then
2^(2*3^(k-1)) = (2^(2*3^(k-2)))^3
= (1 + 3^(k-1) + 3^k*q)^3
= 1 + 3*(3^(k-1)) + 3*(3^(k-1))^2 + (3^(k-1))^3
+ 3*(1+3^(k-1))^2*(3^k*q) + 3*(1+3^(k-1))*(3^k*q)^2 + (3^k*q)^3
= 1 + 3^k mod 3^(k+1).
Theorem: for all integers i >= 0 and k >= 1, we have
2^i = 1 mod 3^k if and only if i = 0 mod 2*3^(k-1).
Proof: the “if” direction follows from the Lemma. If
i = 0 mod 2*3^(k-1), then
2^i = (2^(2*3^(k-1)))^(i/(2*3^(k-1)))
= (1+3^k)^(i/(2*3^(k-1))) mod 3^(k+1)
= 1 mod 3^k.
The “only if” direction is by induction on k.
Base case (k = 1): if i != 0 mod 2, then i = 1 mod 2, and
2^i = (2^2)^((i-1)/2)*2
= 4^((i-1)/2)*2
= 2 mod 3
!= 1 mod 3.
Inductive case (k >= 2): if 2^i = 1 mod 3^k, then
2^i = 1 mod 3^(k-1), and the inductive hypothesis implies that
i = 0 mod 2*3^(k-2). Let j = i/(2*3^(k-2)). By the Lemma,
1 = 2^i mod 3^k
= (1+3^(k-1))^j mod 3^k
= 1 + j*3^(k-1) mod 3^k,
where the dropped terms are divisible by (3^(k-1))^2, so
j = 0 mod 3, and i = 0 mod 2*3^(k-1).
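A quick numeric sanity check of the theorem (my own addition, not part of the proof): the multiplicative order of 2 modulo 3^k should come out to exactly 2*3^(k-1).

def order_of_2_mod(m):
    i, x = 1, 2 % m
    while x != 1:
        x = (x * 2) % m
        i += 1
    return i

for k in range(1, 8):
    m = 3 ** k
    assert order_of_2_mod(m) == 2 * 3 ** (k - 1)
    print(m, order_of_2_mod(m))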

minimum steps required to make array of integers contiguous

Given a sorted array of distinct integers, what is the minimum number of steps required to make the integers contiguous? The condition is: in a step, only one element can be changed, and it can be either increased or decreased by 1. For example, if we have 2,4,5,6 then '2' can be made '3', thus making the elements contiguous (3,4,5,6). Hence the minimum number of steps here is 1. Similarly for the array 2,4,5,8:
Step 1: '2' can be made '3'
Step 2: '8' can be made '7'
Step 3: '7' can be made '6'
Thus the sequence now is 3,4,5,6 and the number of steps is 3.
I tried as follows but am not sure if it's correct:
//n is the number of elements in array a
int count = a[n-1] - a[0] - 1;
for (i = 1; i <= n-2; i++)
{
    count--;
}
printf("%d\n", count);
Thanks.
The intuitive guess is that the "center" of the optimal sequence will be the arithmetic average, but this is not the case. Let's find the correct solution with some vector math:
Part 1: Assuming the first number is to be left alone (we'll deal with this assumption later), calculate the differences, so 1 2 3 4 5 6 - 1 12 3 14 5 16 would yield 0 -10 0 -10 0 -10.
sidenote: Notice that a "contiguous" array by your implied definition would be an increasing arithmetic sequence with difference 1. (Note that there are other reasonable interpretations of your question: some people may consider 5 4 3 2 1 to be contiguous, or 5 3 1 to be contiguous, or 1 2 3 2 3 to be contiguous. You also did not specify if negative numbers should be treated any differently.)
theorem: The contiguous numbers must lie between the minimum and maximum number. [proof left to reader]
Part 2: Now returning to our example, assume we took the 30 steps (sum(abs(0 -10 0 -10 0 -10)) = 30) required to turn 1 12 3 14 5 16 into 1 2 3 4 5 6. This is one correct answer. But 0 -10 0 -10 0 -10 + c is also an answer which yields an arithmetic sequence of difference 1, for any constant c. In order to minimize the number of "steps", we must pick an appropriate c. Each time we increase or decrease c, we change the number of steps by at most N=6 (the length of the vector). So for example, if we wanted to turn our original sequence 1 12 3 14 5 16 into 3 4 5 6 7 8 (c=2), then the differences would have been 2 -8 2 -8 2 -8, and sum(abs(2 -8 2 -8 2 -8)) = 30.
Now this is very clear if you could picture it visually, but it's sort of hard to type out in text. First we took our difference vector. Imagine you drew it like so:
4|
3| *
2| * |
1| | | *
0+--+--+--+--+--*
-1| |
-2| *
We are free to "shift" this vector up and down by adding or subtracting 1 from everything. (This is equivalent to finding c.) We wish to find the shift which minimizes the number of | you see (the area between the curve and the x-axis). This is NOT the average (that would be minimizing the standard deviation or RMS error, not the absolute error). To find the minimizing c, let's think of this as a function and consider its derivative. If the differences are all far away from the x-axis (we're trying to make 101 112 103 114 105 116), it makes sense to just not add this extra stuff, so we shift the function down towards the x-axis. Each time we decrease c, we improve the solution by 6. Now suppose that one of the *s passes the x axis. Each time we decrease c, we improve the solution by 5-1=4 (we save 5 steps of work, but have to do 1 extra step of work for the * below the x-axis). Eventually when HALF the *s are past the x-axis, we can NO LONGER IMPROVE THE SOLUTION (derivative: 3-3=0). (In fact soon we begin to make the solution worse, and can never make it better again. Not only have we found the minimum of this function, but we can see it is a global minimum.)
Thus the solution is as follows: Pretend the first number is in place. Calculate the vector of differences. Minimize the sum of the absolute value of this vector; do this by finding the median OF THE DIFFERENCES and subtracting that off from the differences to obtain an improved differences-vector. The sum of the absolute value of the "improved" vector is your answer. This is O(N). The solutions of equal optimality will (as per the above) always be "adjacent". A unique solution exists only if there are an odd number of numbers; otherwise if there are an even number of numbers, AND the median-of-differences is not an integer, the equally-optimal solutions will have difference-vectors with corrective factors of any number between the two medians.
So I guess this wouldn't be complete without a final example.
input: 2 3 4 10 14 14 15 100
difference vector: 2 3 4 5 6 7 8 9-2 3 4 10 14 14 15 100 = 0 0 0 -5 -8 -7 -7 -91
note that the medians of the difference-vector are not in the middle anymore, we need to perform an O(N) median-finding algorithm to extract them...
medians of difference-vector are -5 and -7
let us take -5 to be our correction factor (any number between the medians, such as -6 or -7, would also be a valid choice)
thus our new goal is 2 3 4 5 6 7 8 9+5=7 8 9 10 11 12 13 14, and the new differences are 5 5 5 0 -3 -2 -2 -86*
this means we will need to do 5+5+5+0+3+2+2+86=108 steps
*(we obtain this by repeating step 2 with our new target, or by adding 5 to each number of the previous difference... but since you only care about the sum, we'd just add 8*5 (vector length times correct factor) to the previously calculated sum)
Alternatively, we could have also taken -6 or -7 to be our correction factor. Let's say we took -7...
then the new goal would have been 2 3 4 5 6 7 8 9+7=9 10 11 12 13 14 15 16, and the new differences would have been 7 7 7 2 1 0 0 -84
this would have meant we'd need to do 7+7+7+2+1+0+0+84=108 steps, the same as above
If you simulate this yourself, you can see the number of steps becomes >108 as we take offsets outside the range [-7, -5].
Pseudocode:
def minSteps(array A of size N):
A' = [0,1,...,N-1]
diffs = A'-A
medianOfDiffs = leftMedian(diffs)
return sum(abs(diffs-medianOfDiffs))
Python:
leftMedian = lambda x: sorted(x)[len(x)//2]

def minSteps(array):
    target = range(len(array))
    diffs = [t - a for t, a in zip(target, array)]
    medianOfDiffs = leftMedian(diffs)
    return sum(abs(d - medianOfDiffs) for d in diffs)
edit:
It turns out that for arrays of distinct integers, this is equivalent to a simpler solution: picking one of the (up to 2) medians, assuming it doesn't move, and moving other numbers accordingly. This simpler method often gives incorrect answers if you have any duplicates, but the OP didn't ask that, so that would be a simpler and more elegant solution. Additionally we can use the proof I've given in this solution to justify the "assume the median doesn't move" solution as follows: the corrective factor will always be in the center of the array (i.e. the median of the differences will be from the median of the numbers). Thus any restriction which also guarantees this can be used to create variations of this brainteaser.
Get one of the medians of all the numbers. As the numbers are already sorted, this shouldn't be a big deal. Assume that median does not move. Then compute the total cost of moving all the numbers accordingly. This should give the answer.
community edit:
def minSteps(a):
    """INPUT: list of sorted unique integers"""
    n = len(a)
    mid = n // 2
    oneMedian = a[mid]
    aTarget = [oneMedian + (i - mid) for i in range(n)]
    # aTarget looks roughly like [m - n/2, ..., m-1, m, m+1, ..., m + n/2]
    return sum(abs(aTarget[i] - a[i]) for i in range(n))
This is probably not an ideal solution, but a first idea.
Given a sorted sequence [x_1, x_2, …, x_n]:
Write a function that returns the differences of an element to the previous and to the next element, i.e. (x_i - x_(i-1), x_(i+1) - x_i).
If the difference to the previous element is > 1, you would have to increase all previous elements by x_i - x_(i-1) - 1. That is, the number of necessary steps would increase by the number of previous elements × (x_i - x_(i-1) - 1). Let's call this number a.
If the difference to the next element is > 1, you would have to decrease all subsequent elements by x_(i+1) - x_i - 1. That is, the number of necessary steps would increase by the number of subsequent elements × (x_(i+1) - x_i - 1). Let's call this number b.
If a < b, then increase all previous elements until they are contiguous to the current element. If a > b, then decrease all subsequent elements until they are contiguous to the current element. If a = b, it doesn't matter which of these two actions is chosen.
Add up the number of steps taken in the previous step (by increasing the total number of necessary steps by either a or b), and repeat until all elements are contiguous.
First of all, imagine that we pick an arbitrary target of contiguous increasing values and then calculate the cost (number of steps required) for modifying the array to match.
Original: 3 5 7 8 10 16
Target: 4 5 6 7 8 9
Difference: +1 0 -1 -1 -2 -7 -> Cost = 12
Sign: + 0 - - - -
Because the input array is already ordered and distinct, it is strictly increasing. Because of this, it can be shown that the differences will always be non-increasing.
If we change the target by increasing it by 1, the cost will change. Each position in which the difference is currently positive or zero will incur an increase in cost by 1. Each position in which the difference is currently negative will yield a decrease in cost by 1:
Original: 3 5 7 8 10 16
New target: 5 6 7 8 9 10
New Difference: +2 +1 0 0 -1 -6 -> Cost = 10 (decrease by 2)
Conversely, if we decrease the target by 1, each position in which the difference is currently positive will yield a decrease in cost by 1, while each position in which the difference is zero or negative will incur an increase in cost by 1:
Original: 3 5 7 8 10 16
New target: 3 4 5 6 7 8
New Difference: 0 -1 -2 -2 -3 -8 -> Cost = 16 (increase by 4)
In order to find the optimal values for the target array, we must find a target such that any change (increment or decrement) will not decrease the cost. Note that an increment of the target can only decrease the cost when there are more positions with negative difference than there are with zero or positive difference. A decrement can only decrease the cost when there are more positions with a positive difference than with a zero or negative difference.
Here are some example distributions of difference signs. Remember that the differences array is non-increasing, so positives always have to be first and negatives last:
    C C
+ + + - - -   optimal
+ + 0 - - -   optimal
0 0 0 - - -   optimal
+ 0 - - - -   can increment (negatives exceed positives & zeroes)
+ + + 0 0 0   optimal
+ + + + - -   can decrement (positives exceed negatives & zeroes)
+ + 0 0 - -   optimal
+ 0 0 0 0 0   optimal
    C C
Observe that if one of the central elements (marked C) is zero, the target must be optimal. In such a circumstance, at best any increment or decrement will not change the cost, but it may increase it. This result is important, because it gives us a trivial solution. We pick a target such that a[n/2] remains unchanged. There may be other possible targets that yield the same cost, but there are definitely none that are better. Here's the original code modified to calculate this cost:
//n is the number of elements in array a
int targetValue;
int cost = 0;
int middle = n / 2;
int startValue = a[middle] - middle;
for (i = 0; i < n; i++)
{
    targetValue = startValue + i;
    cost += abs(targetValue - a[i]);
}
printf("%d\n",cost);
You cannot do it by iterating once on the array, that's for sure.
You need first to check the difference between each two numbers, for example:
2,7,8,9 can be 2,3,4,5 with 12 steps or 6,7,8,9 with 4 steps.
Create a new array with the differences like so: for 2,7,8,9 it will be 5,1,1. Now you can decide whether to increase or decrease the first number.
Let's assume that the contiguous array looks something like this -
c c+1 c+2 c+3 .. and so on
Now lets take an example -
5 7 8 10
The contiguous array in this case will be -
c c+1 c+2 c+3
In order to get the minimum steps, the sum of the modulus of the difference of the integers(before and after) w.r.t the ith index should be the minimum. In which case,
(c-5)^2 + (c-6)^2 + (c-6)^2 + (c-7)^2 should be minimum
Let f(c) = (c-5)^2 + (c-6)^2 + (c-6)^2 + (c-7)^2
= 4c^2 - 48c + 146
Applying differential calculus to get the minima,
f'(c) = 8c - 48 = 0
=> c = 6
So our contiguous array is 6 7 8 9 and the minimum cost here is 2.
To sum it up, just generate f(c), take the first derivative and find out c.
This should take O(n).
Brute force approach O(N*M)
If one draws a slope-1 line through each point (i, a[i]) of the array a, then y0 = a[i] - i is the value where that line crosses index 0. Then the answer is the minimum, over all lines starting at some y0, of the number of steps required to get from a to that line; in Python:
y0s = set(y - i for i, y in enumerate(a))
nsteps = min(sum(abs(y - (y0 + i)) for i, y in enumerate(a))
             for y0 in range(min(y0s), max(y0s) + 1))
Input
2,4,5,6
2,4,5,8
Output
1
3

Finding the minimum and maximm element from one of many arrays

I received a question during an Amazon interview and would like assistance with solving it.
Given N arrays of size K each. The K elements within each array are sorted, and all N*K elements are unique. Choose a single element from each of the N arrays; from the chosen subset of N elements, subtract the minimum from the maximum element. This difference should be as small as possible.
Sample:
N=3, K=3
N=1 : 6, 16, 67
N=2 : 11,17,68
N=3 : 10, 15, 100
here if 16, 17, 15 are chosen, we get the minimum difference as
17-15=2.
I can think of an O(N*K*N) solution (edited after zivo correctly pointed out that it is not a good one :( ).
1. Take N pointer initially pointing to initial element each of N arrays.
6, 16, 67
^
11,17,68
^
10, 15, 100
^
2. Find the highest and lowest element among the currently pointed elements, which takes O(N) (6 and 11 here), and find the difference between them (5).
3. Increment the pointer which is pointing to lowest element by 1 in that array.
6, 16, 67
^
11,17,68
^
10, 15, 100 (difference:5)
^
4. Keep repeating step 2 and 3 and store the minimum difference.
6, 16, 67
^
11,17,68
^
10,15,100 (difference:5)
^
6, 16, 67
^
11,17,68
^
10,15,100 (difference:2)
^
Above will be the required solution.
6, 16, 67
^
11,17,68
^
10,15,100 (difference:84)
^
6, 16, 67
^
11,17,68
^
10,15,100 (difference:83)
^
And so on......
EDIT:
Its complexity can be reduced by using a heap (as suggested by Uri). I thought of it but faced a problem: each time an element is extracted from the heap, its array number has to be found out in order to increment the corresponding pointer for that array. An efficient way to find the array number can definitely reduce the complexity to O(K*N log(K*N)). One naive way is to use a data structure like this
struct
{
int element;
int arraynumber;
}
and reconstruct the initial data like
6|0,16|0,67|0
11|1,17|1,68|1
10|2,15|2,100|2
Initially keep the current max for the first column and insert the pointed elements into the heap. Now each time an element is extracted, its array number can be found out, the pointer in that array is incremented, the newly pointed element can be compared to the current max, and the max pointer can be adjusted accordingly.
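As a rough sketch of that heap-based variant (the code and names below are mine, not the poster's), each heap entry stores (value, array number) so the array an extracted element came from is known immediately:
import heapq

def min_spread(arrays):
    pos = [0] * len(arrays)                         # next unused index in each array
    heap = [(arr[0], which) for which, arr in enumerate(arrays)]
    heapq.heapify(heap)
    current_max = max(arr[0] for arr in arrays)
    best = float('inf')
    while True:
        value, which = heapq.heappop(heap)          # smallest currently pointed element
        best = min(best, current_max - value)
        pos[which] += 1
        if pos[which] == len(arrays[which]):        # this array is exhausted, so no
            return best                             # further window can include it
        nxt = arrays[which][pos[which]]
        current_max = max(current_max, nxt)
        heapq.heappush(heap, (nxt, which))

print(min_spread([[6, 16, 67], [11, 17, 68], [10, 15, 100]]))   # 2
Each of the N*K elements passes through the heap (of size N) at most once, so this runs in O(N*K*log(N)).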
So here is an algorithm to do solve this problem in two steps:
First step is to merge all your arrays into one sorted array which would look like this:
combined_val[] - which holds all numbers
combined_ind[] - which holds index of which array did this number originally belonged to
This step can be done easily in O(K*N*log(N)), but I think you can do better than that too (maybe not; you can look up variants of merge sort, because they do a step similar to this).
Now second step:
It is easier to just show code instead of explaining, so here is the pseudocode:
int count[N] = { 0 };
int head = 0;
int diffcnt = 0;
// mindiff is initialized to overall maximum value - overall minimum value
int mindiff = combined_val[N * K - 1] - combined_val[0];
for (int i = 0; i < N * K; i++)
{
count[combined_ind[i]]++;
if (count[combined_ind[i]] == 1) {
// diffcnt counts how many arrays have at least one element between
// indexes of "head" and "i". Once diffcnt reaches N it will stay N and
// not increase anymore
diffcnt++;
} else {
while (count[combined_ind[head]] > 1) {
// We try to move head index as forward as possible while keeping diffcnt constant.
// i.e. if count[combined_ind[head]] is 1, then if we would move head forward
// diffcnt would decrease, that is something we dont want to do.
count[combined_ind[head]]--;
head++;
}
}
if (diffcnt == N) {
// i.e. we got at least one element from all arrays
if (combined_val[i] - combined_val[head] < mindiff) {
mindiff = combined_val[i] - combined_val[head];
// if you want to save actual numbers too, you can save this (i.e. i and head
// and then extract data from that)
}
}
}
The result is in mindiff.
The running time of the second step is O(N * K), because the "head" index can only move forward, at most N*K times in total; the inner loop therefore does not make it quadratic, it stays linear.
So the total running time is O(N * K * log(N)). This is dominated by the merging step, so if you can come up with a better merge you can probably bring it down to O(N * K).
This problem is for managers
You have 3 developers (N1), 3 testers (N2) and 3 DBAs (N3)
Choose the least divergent team that can run a project successfully.
int[n] result;// where result[i] keeps the element from bucket N_i
int[n] latest;//where latest[i] keeps the latest element visited from bucket N_i
Iterate elements in (N_1 + N_2 + N_3) in sorted order
{
Keep track of latest element visited from each bucket N_i by updating 'latest' array;
if boundary(latest) < boundary(result)
{
result = latest;
}
}
int boundary(int[] array)
{
return Max(array) - Min(array);
}
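A small Python rendering of that idea (my own code; the names are assumptions): iterate all elements in globally sorted order, keep the most recently seen element of every bucket in latest, and record the best boundary seen once every bucket has appeared.
def least_divergent(buckets):
    merged = sorted((v, which) for which, bucket in enumerate(buckets) for v in bucket)
    latest = [None] * len(buckets)
    best = None
    for value, which in merged:
        latest[which] = value                       # most recent element of this bucket
        if all(x is not None for x in latest):      # every bucket has been seen at least once
            boundary = max(latest) - min(latest)
            if best is None or boundary < best:
                best = boundary
    return best

print(least_divergent([[6, 16, 67], [11, 17, 68], [10, 15, 100]]))   # 2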
I have an O(K*N*log(K)) solution, with typical execution much less. I currently cannot think of anything better. I'll first explain the version that is easier to describe (somewhat longer execution):
For each element f in the first array (loop through K elements)
For each array, starting from the second array (loop through N-1 arrays)
Do a binary search on the array and find the element closest to f. This is your element for that array (log(K))
This algorithm can be optimized if, for each array, you keep a Floor index. When performing the binary search, search only between 'Floor' and 'K-1'.
Initially the Floor index is 0, and for the first element you search through the entire array. Once you find the element closest to 'f', update the Floor index with the index of that element. The worst case is the same (Floor may never advance, e.g. if the maximum element of the first array is smaller than every other array's minimum), but the average case will improve.
Correctness proof for the accepted answer (Terminal's solution)
Assume that the algorithm finds a series A=<A[1],A[2],...,A[N]> which isn't the optimal solution (R).
Consider the index j in R, such that item R[j] is the first item among R that the algorithm examines and replaces it with the next item in its row.
Let A' denote the candidate solution at that phase (prior to the replacement). Since R[j]=A'[j] is the minimum value of A', it's also the minimum of R.
Now, consider the maximum value of R, R[m]. If A'[m]<R[m], then R can be improved by replacing R[m] with A'[m], which contradicts the fact that R is optimal. Therefore, A'[m]=R[m].
In other words, R and A' share the same maximum and minimum, therefore they are equivalent. This completes the proof: if R is an optimal solution, then the algorithm is guaranteed to find a solution as good as R.
for every element in 1st array
choose the element in 2nd array that is closest to the element in 1st array
current_array = 2;
do
{
choose the element in current_array+1 that is closest to the element in current_array
current_array++;
} while(current_array < n);
complexity: O(k^2*n)
Here is my logic on how to resolve this issue, keeping in mind that we need to pick one element from each of the N arrays (to compute the least minimum)
// if we take the above values as an example!
// then the idea would be to sort all three arrays while keeping another
// array to keep the reference to their sets (1 or 2 or 3, could be
// extended to n sets)
1 3 2 3 1 2 1 2 3 // this is the array that holds the set index
6 10 11 15 16 17 67 68 100 // this is the sorted combined array.
| |
5 2 33 // this is the computed least minimum,
// the rule is to make sure the indexes of the values
// we are comparing are different (to make sure we are
// comparing elements from different sets), then for example
// the first element of that example is index:1|value:6 we hold
// that value 6 (that is the value we will be using to compute the least minimum,
// then we go to the edge of the comparison which would be the second different index,
// we skip index:3|value:10 (we remove it from the array) we compare index:2|value:11
// to index:1|value:6 we obtain 5 which would go to a variable named leastMinimum = 5,
// now we remove the indexes and values we already used,
// and redo the same steps.
Step 1:
1 3 2 3 1 2 1 2 3
6 10 11 15 16 17 67 68 100
|
5
leastMinimum = 5
Step 2:
3 1 2 1 2 3
15 16 17 67 68 100
|
2
leastMinimum = min(2, leastMinimum) // which equals 2
Step 3:
1 2 3
67 68 100
33
leastMinimum = min(33, leastMinimum) // which equals the old leastMinimum, which is 2
Now: suppose we have elements from the same array that are very close to each other (k=2 this time, which means we only have 3 sets with two values each):
// After sorting the n arrays we will have the below indexes array and values array
1 1 2 3 2 3
6 7 8 12 15 16
* * *
* we skip the second occurrence of index 1 (1|7) and take the least minimum of 1|6 and 3|12 (index:2|value:8 will be removed as it is not at the edges; we pick the minimum and maximum of the unique-index subset of n elements)
1 3
6 12
=6
* in the second step we remove the values we already used, so the array becomes:
1 2 3
7 15 16
* * *
7 - 16
= 9
Note:
Another approach that consumes more memory would consist of creating N sub-arrays, from which we would compare the maximum - minimum.
So from the below sorted values array and its corresponding indexes array we extract three other sub arrays:
1 3 2 3 1 2 1 2 3
6 10 11 15 16 17 67 68 100
First Array:
1 3 2
6 10 11
11-6 = 5
Second Array:
3 1 2
15 16 17
17-15 = 2
Third Array:
1 2 3
67 68 100
100 - 67 = 33

How can I maximally partition a set?

I'm trying to solve one of the Project Euler problems. As a consequence, I need an algorithm that will help me find all possible partitions of a set, in any order.
For instance, given the set 2 3 3 5:
2 | 3 3 5
2 | 3 | 3 5
2 | 3 3 | 5
2 | 3 | 3 | 5
2 5 | 3 3
and so on. Pretty much every possible combination of the members of the set. I've searched the net of course, but haven't found much that's directly useful to me, since I speak programmer-ese not advanced-math-ese.
Can anyone help me out with this? I can read pretty much any programming language, from BASIC to Haskell, so post in whatever language you wish.
Have you considered a search tree? Each node would represent a choice of where to put an element and the leaf nodes are answers. I won't give you code because that's part of the fun of Project Euler ;)
Take a look at:
The Art of Computer Programming, Volume 4, Fascicle 3: Generating All Combinations and Partitions
7.2.1.5. Generating all set partitions
In general I would look at the structure of the recursion used to compute the number of configurations, and build a similar recursion for enumerating them. Best is to compute a one-to-one mapping between integers and configurations. This works well for permutations, combinations, etc. and ensures that each configuration is enumerated only once.
Now even the recursion for the number of partitions of some identical items is rather complicated.
For partitions of multisets the counting amounts to solving the generalization of Project Euler problem 181 to arbitrary multisets.
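If you just want the partitions themselves, the usual recursion is: the first element either joins one of the blocks of a partition of the remaining elements, or forms a block of its own. Here is a small sketch of that (my own code, not from the book); note it treats items by position, so with repeated values like [2, 3, 3, 5] some partitions are produced more than once and would need de-duplicating:
def set_partitions(items):
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for partition in set_partitions(rest):
        # put `first` into each existing block in turn
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i+1:]
        # or give `first` a block of its own
        yield [[first]] + partition

for p in set_partitions([2, 3, 3, 5]):
    print(p)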
Well, the problem has two aspects.
Firstly, the items can be arranged in any order. So for N items, there are N! permutations (assuming the items are treated as unique).
Secondly, you can envision the grouping as a bit flag between each item indicating a divide. There would be N-1 of these flags, so for a given permutation there would be 2^(N-1) possible groupings.
This means that for N items, there would be a total of N!*(2^(N-1)) groupings/permutations, which gets big very very fast.
In your example, the top four items are groupings of one permutation. The last item is a grouping of another permutation. Your items can be viewed as :
2 on 3 off 3 off 5
2 on 3 on 3 off 5
2 on 3 off 3 on 5
2 on 3 on 3 on 5
2 off 5 on 3 off 3
The permutations (the order of display) can be derived by looking at them like a tree, as mentioned by the other two. This would almost certainly involve recursion, such as here.
The grouping is independent of them in many ways. Once you have all the permutations, you can link them with the groupings if needed.
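To make the bit-flag picture concrete, here is a tiny sketch (my own code, not the answerer's) that enumerates the 2^(N-1) groupings of one fixed ordering:
def groupings(perm):
    n = len(perm)
    for mask in range(2 ** (n - 1)):
        groups = [[perm[0]]]
        for i in range(1, n):
            if mask & (1 << (i - 1)):       # divider ("on") between perm[i-1] and perm[i]
                groups.append([perm[i]])
            else:                           # no divider ("off"), stay in the same group
                groups[-1].append(perm[i])
        yield groups

for g in groupings([2, 3, 3, 5]):
    print(g)                                # 8 groupings of this one ordering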
Here is the code you need for this part of your problem:
def memoize(f):
memo={}
def helper(x):
if x not in memo:
memo[x]=f(x)
return memo[x]
return helper
@memoize
def A000041(n):
if n == 0: return 1
S = 0
J = n-1
k = 2
while 0 <= J:
T = A000041(J)
S = S+T if k//2%2!=0 else S-T
J -= k if k%2!=0 else k//2
k += 1
return S
print A000041(100) # the 100th number in this series, as an example
I quickly whipped up some code to do this. However, I left out separating every possible combination of the given list, because I wasn't sure it was actually needed, but it should be easy to add, if necessary.
Anyway, the code runs quite well for small amounts, but, as CodeByMoonlight already mentioned, the number of possibilities gets really high really fast, so the runtime increases accordingly.
Anyway, here's the python code:
import time
def separate(toseparate):
"Find every possible way to separate a given list."
#The list of every possibility
possibilities = []
n = len(toseparate)
#We can distribute n-1 separations in the given list, so iterate from 0 to n
for i in xrange(n):
#Create a copy of the list to avoid modifying the already existing list
copy = list(toseparate)
#A boolean list indicating where a separator is put. 'True' indicates a separator
#and 'False', of course, no separator.
#The list will contain i separators, the rest is filled with 'False'
separators = [True]*i + [False]*(n-i-1)
for j in xrange(len(separators)):
#We insert the separators into our given list. The separators have to
#be between two elements. The index between two elements is always
#2*[index of the left element]+1.
copy.insert(2*j+1, separators[j])
#The first possibility is, of course, the one we just created
possibilities.append(list(copy))
#The following is a modification of the QuickPerm algorithm, which finds
#all possible permutations of a given list. It was modified to only permutate
#the spaces between two elements, so it finds every possibility to insert n
#separators in the given list.
m = len(separators)
hi, lo = 1, 0
p = [0]*m
while hi < m:
if p[hi] < hi:
lo = (hi%2)*p[hi]
copy[2*lo+1], copy[2*hi+1] = copy[2*hi+1], copy[2*lo+1]
#Since the items are non-unique, some possibilities will show up more than once, so we
#avoid this by checking first.
if not copy in possibilities:
possibilities.append(list(copy))
p[hi] += 1
hi = 1
else:
p[hi] = 0
hi += 1
return possibilities
t1 = time.time()
separations = separate([2, 3, 3, 5])
print time.time()-t1
sepmap = {True:"|", False:""}
for a in separations:
for b in a:
if sepmap.has_key(b):
print sepmap[b],
else:
print b,
print "\n",
It's based on the QuickPerm algorithm, which you can read more about here: QuickPerm
Basically, my code generates a list containing n separations, inserts them into the given list and then finds all possible permutations of the separations in the list.
So, if we use your example we would get:
2 3 3 5
2 | 3 3 5
2 3 | 3 5
2 3 3 | 5
2 | 3 | 3 5
2 3 | 3 | 5
2 | 3 3 | 5
2 | 3 | 3 | 5
In 0.000154972076416 seconds.
However, I read through the description of the problem you are working on, and I see how you are trying to solve it, but seeing how quickly the runtime increases I don't think it would work as fast as you would expect. Remember that Project Euler's problems should solve in around a minute.

Resources