Problem: there are two parallel arrays of positive values, A and B, each of size n.
How do I find the minimal value of the following target function:
F(A, B) = A_k + B_k * F(A', B')
where A', B' denote the arrays A and B with their k-th element removed.
I was thinking about a dynamic programming approach, but with no success.
How does one approach this kind of problem, where we need to evaluate a given function over a permutation?
The optimal solution is to compute (B_k - 1)/A_k for each k and place the elements with smaller (including more negative) values at the outermost positions of the recursion, i.e. sort by (B_k - 1)/A_k in ascending order.
This is locally optimal in the sense that you cannot swap a pair of adjacent choices and improve, and therefore globally optimal, since the ordering is unique apart from ties in (B_k - 1)/A_k, which make no difference. Any ordering that does not have this property can be improved by an adjacent swap and hence is not optimal.
If we compare A_1 + B_1*(A_2 + B_2*F) with A_2 + B_2*(A_1 + B_1*F), then the former will be smaller (or equal) iff
A_1 + B_1*(A_2 + B_2*F) <= A_2 + B_2*(A_1 + B_1*F)
A_1 + B_1*A_2 + B_1*B_2*F <= A_2 + B_2*A_1 + B_2*B_1*F
A_1 + B_1*A_2 <= A_2 + B_2*A_1
A_2*(B_1 - 1) <= A_1*(B_2 - 1)
(B_1 - 1)/A_1 <= (B_2 - 1)/A_2
noting that A_k > 0, so dividing by A_1*A_2 preserves the inequality.
The value of the empty F(,) does not matter for the ordering, as it appears at the end multiplied by the product of all the B_k regardless of the order.
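For concreteness, here is a short Python sketch of this ordering rule (my own illustration, not part of the original answer; F(empty) is taken as 0):

def minimal_F(A, B, f_empty=0.0):
    """Sort indices by (B_k - 1) / A_k ascending and evaluate F from the inside out."""
    order = sorted(range(len(A)), key=lambda k: (B[k] - 1) / A[k])
    value = f_empty
    for k in reversed(order):          # innermost term first
        value = A[k] + B[k] * value
    return value

# example from the thread: A = {1000, 900}, B = {0.1, 0.5}
print(minimal_F([1000, 900], [0.1, 0.5]))   # 1090.0, versus 1400.0 for the other order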
I've come up with a heuristic. Too bad it is not optimal (thanks yi_H!) =(
At first, I thought of sorting by increasing values of A_i. However, counterexamples remain (e.g. A = {1000, 900} and B = {0.1, 0.5}), so I came up with this:
For each value of i in [1..n], compute V_i = A_i + B_i*min(A_j) for j!=i
Choose i such that V_i is the smallest among all the V values. Remove A_i and B_i from A and B; this pair becomes the outermost term of the recursion.
Repeat with A' and B' until the end (until both are empty).
The algorithm is O(n^2) if you memorize the V_i and update them, otherwise it's O(n^3) for a naive implementation.
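A rough Python sketch of this heuristic, in its naive O(n^3) form without memoizing the V_i (my own illustration):

def fezvez_order(A, B):
    """Repeatedly pick i minimizing V_i = A_i + B_i * min(A_j for j != i)."""
    A, B = list(A), list(B)
    order = []
    while A:
        best_v, best_i = None, None
        for i in range(len(A)):
            rest = [A[j] for j in range(len(A)) if j != i]
            v = A[i] + B[i] * min(rest) if rest else A[i]
            if best_v is None or v < best_v:
                best_v, best_i = v, i
        order.append((A.pop(best_i), B.pop(best_i)))   # next outermost (A_k, B_k) pair
    return order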
Edit: Congrats to yi_H for finding counter-examples showing why this is not optimal!
Not a solution, but a likely heuristic. Looking at F(A, B) = A_k + B_k * F(A', B'), it seems pretty obvious that F(A', B') is going to be larger than A_k or B_k. Hence, because of the multiplication, we should pick B_k to be as small as possible, which gives us a value of k and hence a possible smallest F(A, B) when we calculate it out. If there is more than one smallest B_k, we can calculate them all and pick the smallest.
We can then start a brute force algorithm ploughing through all the possible results, but we already have a likely smallest, so we can terminate early if our current trial is going to give us a result larger than we already have.
It's O(2^n * n), which is not exactly efficient, but it works and is better than the O(n! * n) brute force mentioned in the comments.
int n;
double a[MAXN], b[MAXN];        // global input (MAXN is an upper bound on n)
double pres[1 << MAXN];         // memo; all zeros on startup, and a real result is never 0

// F restricted to the elements whose bits are set in mask.
double res(int mask) {
    if (pres[mask] != 0)        // already computed -- lazy DP
        return pres[mask];
    if (mask == 0)
        return pres[mask] = 1;  // F(empty): replace 1 with your default value
    pres[mask] = INF;           // INF > any possible result
    for (int i = 0; i < n; ++i) {
        if (mask & (1 << i)) {
            // try removing the i-th element, recurse, and keep the minimum
            pres[mask] = min(pres[mask], a[i] + b[i] * res(mask - (1 << i)));
        }
    }
    return pres[mask];
}

double ans = res((1 << n) - 1); // result for the full array
You can code it without recursion:
res[0] = 1;                     // F(empty)
for (int mask = 1; mask < (1 << n); ++mask) {
    res[mask] = INF;
    for (int i = 0; (1 << i) <= mask; ++i) {
        if (mask & (1 << i)) {
            res[mask] = min(res[mask], a[i] + b[i] * res[mask - (1 << i)]);
        }
    }
}
// use res[(1 << n) - 1]
PS: I rely on all elements being positive, i.e. a < b && c < d => a*c < b*d.
I have a loop which tries every combination (N^2) of two elements in the list and tries to swap them. If the result (I'm evaluating with k=1) gets better, it starts over from the beginning.
It seems to be working for N <= 10, and might be good for larger N as well, but I can't really test it because my verifier is the brute-force O(N!) algorithm :D Also, I have no idea how fast it converges for large Ns.
I tried a randomized algorithm which picks the swap positions randomly and stops after X unsuccessful tries... it rarely finds the best solution.
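A minimal Python sketch of this pairwise-swap local search, in the non-restarting form (my own illustration; I read "evaluating with k=1" as F(empty) = 1):

def evaluate(order, A, B, f_empty=1.0):
    value = f_empty
    for k in reversed(order):          # innermost term first
        value = A[k] + B[k] * value
    return value

def swap_improve(order, A, B):
    """Keep sweeping over all pairwise swaps until no swap improves the target."""
    best = evaluate(order, A, B)
    improved = True
    while improved:
        improved = False
        for i in range(len(order)):
            for j in range(i + 1, len(order)):
                order[i], order[j] = order[j], order[i]
                v = evaluate(order, A, B)
                if v < best:
                    best, improved = v, True
                else:
                    order[i], order[j] = order[j], order[i]   # undo the swap
    return order, best

# usage: order, value = swap_improve(list(range(len(A))), A, B)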
Update:
Running in python:
N=40 N=50 N=60
2.8s 5.3s 8.4s (starting point: not sorted)
1.7s 2.8s 4.4s (sort on a first)
1.2s 2.2s 4.3s (sort on b first)
0.8s 1.9s 2.5s (using Fezvez's algorithm as a starting point)
All measurements contain the running time of pre-sort (the 4th one Fezvez's algorithm). If anybody thinks his solution gets close to the optimal please let me know, I'll test it.
Update2:
My algorithm restarted the search after every improvement, which was kinda dumb.. I don't want to rerun all the tests; here is some new data (still can't verify the results, you have to come up with an algorithm which does better.. :)) Now with Fezvez + swap improvement:
N=100: 1.0s N=150: 3.1s N=200: 7.0s
Some improvement stats (N=200, uniform dist.: A: [1, 1000], B: [0.1, 0.9])
Fezvez improvement
38.172841 36.764499
13.809364 13.805913
27.287438 26.389688
45.101368 40.364930
14.623132 14.599037
33.060609 31.298794
This solution is not optimal, only practical. But I'm afraid this is a hard problem. In the meantime, the following should get you a good permutation.
Assuming b_k < 1, choosing the permutation which makes the a_k increasing is a good starting point.
You can try simulated annealing from this initial guess. Random transpositions as state transitions should be OK.
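A minimal simulated-annealing sketch along these lines (my own illustration; the cooling schedule and parameters are arbitrary, and F(empty) is taken as 0):

import math, random

def evaluate(order, A, B, f_empty=0.0):
    value = f_empty
    for k in reversed(order):
        value = A[k] + B[k] * value
    return value

def anneal(A, B, steps=100000, t0=1.0, t1=1e-4):
    """Simulated annealing with random transpositions; assumes len(A) >= 2."""
    order = sorted(range(len(A)), key=lambda k: A[k])   # initial guess: increasing a_k
    cur = evaluate(order, A, B)
    for s in range(steps):
        t = t0 * (t1 / t0) ** (s / steps)               # geometric cooling
        i, j = random.sample(range(len(order)), 2)      # random transposition
        order[i], order[j] = order[j], order[i]
        new = evaluate(order, A, B)
        if new <= cur or random.random() < math.exp((cur - new) / t):
            cur = new                                   # accept the move
        else:
            order[i], order[j] = order[j], order[i]     # reject: undo
    return order, cur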
Related
I was given a tricky question.
Given:
A = [a1,a2,...an] (list of positive integers with length "n")
r (positive integer)
Find a list of { *, + } operators
O = [o1,o2,...on-1]
so that if we placed those operators between the elements of "A", the resulting expression would evaluate to "r". Only one solution is required.
So for example if
A = [1,2,3,4]
r = 14
then
O = [*, +, *]
I've implemented a simple recursive solution with some optimisation, but of course it's exponential, O(2^n) time, so for an input of length 40 it runs for ages.
I wanted to ask if any of you know a sub-exponential solution for this?
Update
Elements of A are between 0 and 10000,
r can be arbitrarily big
Let A and B be positive integers. Then A + B ≤ A × B + 1.
This little fact can be used to construct a very efficient algorithm.
Let's define a graph. The graph nodes correspond to operations lists, for example, [+, ×, +, +, ×]. There is an edge from graph node X to graph node Y if the Y can be obtained by changing a single + to a × in X. The graph has a source at the node corresponding to [+, +, ..., +].
Now perform a breadth-first search from the source node, constructing the graph as you go. When expanding a node [+, ×, +, +, ×], for example, you (construct if necessary and then) connect to the nodes [×, ×, +, +, ×], [+, ×, ×, +, ×], and [+, ×, +, ×, ×]. Do not expand to a node if the result of evaluating it is greater than r + k(O), where k(O) is the number of +'s in the operation list O. This is because of the "+ 1" in the fact at the beginning of the answer; consider the case of a = [1, 1, 1, 1, 1], r = 1.
This approach uses O(n * 2^n) time and O(2^n) space (where both are potentially very loose worst-case bounds). This is still an exponential algorithm, however I think you will find it performs very reasonably for non-sinister inputs. (I suspect this problem is NP-complete, which is why I am happy with this "non-sinister inputs" escape clause.)
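A small Python sketch of this search (my own illustration, assuming all entries are positive as in the premise above; it returns one satisfying operator list, or None):

from collections import deque

def evaluate(a, ops):
    """Evaluate a[0] ops[0] a[1] ops[1] ... with the usual * before + precedence."""
    total, term = 0, a[0]
    for x, op in zip(a[1:], ops):
        if op == '*':
            term *= x
        else:
            total += term
            term = x
    return total + term

def find_ops(a, r):
    start = ('+',) * (len(a) - 1)          # source node: all plus signs
    seen = {start}
    queue = deque([start])
    while queue:
        ops = queue.popleft()
        value = evaluate(a, ops)
        if value == r:
            return list(ops)
        if value > r + ops.count('+'):     # pruning rule from the answer
            continue
        for i, op in enumerate(ops):       # expand: change one '+' into a '*'
            if op == '+':
                child = ops[:i] + ('*',) + ops[i + 1:]
                if child not in seen:
                    seen.add(child)
                    queue.append(child)
    return None

print(find_ops([1, 2, 3, 4], 14))          # ['*', '+', '*']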
Here's an O(rn^2)-time, O(rn)-space DP approach. If r << 2^n then this will have better worst-case behaviour than exponential-time branch-and-bound approaches, though even then the latter may still be faster on many instances. This is pseudo-polynomial time, because it takes time proportional to the value of part of its input (r), not its size (which would be log2(r)). Specifically it needs rn bits of memory, so it should give answers in a few seconds for up to around rn < 1,000,000,000 and n < 1000 (e.g. n = 100, r = 10,000,000).
The key observation is that any formula involving all n numbers has a final term that consists of some number i of factors, where 1 <= i <= n. That is, any formula must be in one of the following n cases:
(a formula on the first n-1 terms) + a[n]
(a formula on the first n-2 terms) + a[n-1] * a[n]
(a formula on the first n-3 terms) + a[n-2] * a[n-1] * a[n]
...
a[1] * a[2] * ... * a[n]
Let's call the "prefix" of a[] consisting of the first i numbers P[i]. If we record, for each 0 <= i <= n-1, the complete set of values <= r that can be reached by some formula on P[i], then based on the above, we can quite easily compute the complete set of values <= r that can be reached by P[n]. Specifically, let X[i][j] be a true or false value that indicates whether the prefix P[i] can achieve the value j. (X[][] could be stored as an array of n size-(r+1) bitmaps.) Then what we want to do is compute X[n][r], which will be true if r can be reached by some formula on a[], and false otherwise. (X[n][r] isn't quite the full answer yet, but it can be used to get the answer.)
X[1][a[1]] = true. X[1][j] = false for all other j. For any 2 <= i <= n and 0 <= j <= r, we can compute X[i][j] using
X[i][j] = X[i - 1][j - a[i]] ||
X[i - 2][j - a[i-1]*a[i]] ||
X[i - 3][j - a[i-2]*a[i-1]*a[i]] ||
... ||
X[1][j - a[2]*a[3]*...*a[i]] ||
(a[1]*a[2]*...*a[i] == j)
Note that the last line is an equality test that compares the product of all i numbers in P[i] to j, and returns true or false. There are i <= n "terms" (rows) in the expression for X[i][j], each of which can be computed in constant time (note in particular that the multiplications can be built up in constant time per row), so computing a single value X[i][j] can be done in O(n) time. To find X[n][r], we need to calculate X[i][j] for every 1 <= i <= n and every 0 <= j <= r, so there is O(rn^2) overall work to do. (Strictly speaking we may not need to compute all of these table entries if we use memoization instead of a bottom-up approach, but many inputs will require us to compute a large fraction of them anyway, so it's likely that the latter is faster by a small constant factor. Also a memoization approach requires keeping an "already processed" flag for each DP cell -- which doubles the memory usage when each cell is just 1 bit!)
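Here is a compact Python sketch of this DP (my own illustration; it only answers reachability and omits the reconstruction step described below):

def reachable(a, r):
    """X[i][j] is True iff some +/* formula on a[:i] evaluates to j, for 0 <= j <= r."""
    n = len(a)
    X = [[False] * (r + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        prod = 1
        no_zero = 0 not in a[:i]
        for t in range(1, i + 1):              # final product term = last t numbers
            prod *= a[i - t]
            if prod > r:
                if no_zero:
                    break                      # the product can only keep growing
                continue
            if t == i:
                X[i][prod] = True              # the whole prefix is one big product
            else:
                for j in range(prod, r + 1):   # combine with formulas on the shorter prefix
                    if X[i - t][j - prod]:
                        X[i][j] = True
    return X[n][r]

print(reachable([1, 2, 3, 4], 14))   # True: 1*2 + 3*4 = 14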
Reconstructing a solution
If X[n][r] is true, then the problem has a solution (satisfying formula), and we can reconstruct one in O(n^2) time by tracing back through the DP table, starting from X[n][r], at each location looking for any term that enabled the current location to assume the value "true" -- that is, any true term. (We could do this reconstruction step faster by storing more than a single bit per (i, j) combination -- but since r is allowed to be "arbitrarily big", and this faster reconstruction won't improve the overall time complexity, it probably makes more sense to go with the approach that uses the fewest bits per DP table entry.) All satisfying solutions can be reconstructed this way, by backtracking through all true terms instead of just picking any one -- but there may be an exponential number of them.
Speedups
There are two ways that calculation of an individual X[i][j] value can be sped up. First, because all the terms are combined with ||, we can stop as soon as the result becomes true, since no later term can make it false again. Second, if there is no zero anywhere to the left of i, we can stop as soon as the product of the final numbers becomes larger than r, since there's no way for that product to be decreased again.
When there are no zeroes in a[], that second optimisation is likely to be very important in practice: it has the potential to make the inner loop much smaller than the full i-1 iterations. In fact if a[] contains no zeroes, and its average value is v, then after k terms have been computed for a particular X[i][j] value the product will be around v^k -- so on average, the number of inner loop iterations (terms) needed drops from n to log_v(r) = log(r)/log(v). That might be much smaller than n, in which case the average time complexity for this model drops to O(rn*log(r)/log(v)).
[EDIT: We actually can save multiplications with the following optimisation :)]
8/32/64 X[i][j]s at a time: X[i][j] is independent of X[i][k] for k != j, so if we are using bitsets to store these values, we can calculate 8, 32 or 64 of them (or maybe more, with SSE2 etc.) in parallel using simple bitwise OR operations. That is, we can calculate the first term of X[i][j], X[i][j+1], ..., X[i][j+31] in parallel, OR them into the results, then calculate their second terms in parallel and OR them in, etc. We still need to perform the same number of subtractions this way, but the products are all the same, so we can reduce the number of multiplications by a factor of 8/32/64 -- as well as, of course, the number of memory accesses. OTOH, this makes the first optimisation from the previous paragraph harder to accomplish -- you have to wait until an entire block of 8/32/64 bits have become true before you can stop iterating.
Zeroes: Zeroes in a[] may allow us to stop early. Specifically, if we have just computed X[i][r] for some i < n and found it to be true, and there is a zero anywhere to the right of position i in a[], then we can stop: we already have a formula on the first i numbers that evaluates to r, and we can use that zero to "kill off" all numbers to the right of position i by creating one big product term that includes all of them.
Ones: An interesting property of any a[] entry containing the value 1 is that it can be moved to any other position in a[] without affecting whether or not there is a solution. This is because every satisfying formula either has a * on at least one side of this 1, in which case it multiplies some other term and has no effect there, and would likewise have no effect anywhere else; or it has a + on both sides (imagine extra + signs before the first position and after the last), in which case it might as well be added in anywhere.
So, we can safely shunt all 1 values to the end of a[] before doing anything else. The point of doing this is that now we don't have to evaluate these rows of X[][] at all, because they only influence the outcome in a very simple way. Suppose there are m < n ones in a[], which we have moved to the end. Then after computing the m+1 values X[n-m][r-m], X[n-m][r-m+1], X[n-m][r-m+2], ..., X[n-m][r], we already know what X[n][r] must be: if any of them are true, then X[n][r] must be true, otherwise (if they are all false) it must be false. This is because the final m ones can add anywhere from 0 up to m to a formula on the first n-m values. (But if a[] consists entirely of 1s, then at least 1 must be "added" -- they can't all multiply some other term.)
Here is another approach that might be helpful. It is sometimes known as a "meet-in-the-middle" algorithm and runs in O(n * 2^(n/2)). The basic idea is this. Suppose n = 40 and you know that the middle slot is a +. Then, you can brute force all N := 2^20 possibilities for each side. Let A be a length N array storing the possible values of the left side, and similarly let B be a length N array storing the values for the right side.
Then, after sorting A and B, it is not hard to efficiently check for whether any two of them sum to r (e.g. for each value in A, do a binary search on B, or you can even do it in linear time if both arrays are sorted). This part takes O(N * log N) = O(n * 2^(n/2)) time.
Now, this was all assuming the middle slot is a +. If not, then it has to be a *, and you can combine the middle two elements into one (their product), reducing the problem to n = 39. Then you try the same thing, and so on. If you analyze it carefully, you should get O(n * 2^(n/2)) as the asymptotic complexity, since actually the largest term dominates.
You need to do some bookkeeping to actually recover the +'s and *'s, which I have left out to simplify the explanation.
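A Python sketch of the idea (my own illustration; half_values brute-forces one half, which is where the 2^(n/2) factor comes from):

from bisect import bisect_left

def half_values(nums):
    """All values reachable by placing +/* between nums (2^(len-1) formulas)."""
    m = len(nums)
    vals = []
    for mask in range(1 << (m - 1)):       # bit i set => '*' between nums[i] and nums[i+1]
        total, term = 0, nums[0]
        for i in range(m - 1):
            if mask >> i & 1:
                term *= nums[i + 1]
            else:
                total += term
                term = nums[i + 1]
        vals.append(total + term)
    return vals

def reachable_mitm(a, r):
    a = list(a)
    while len(a) >= 2:
        mid = len(a) // 2
        left = half_values(a[:mid])
        right = sorted(half_values(a[mid:]))
        for v in left:                      # is there a solution with '+' at the middle slot?
            i = bisect_left(right, r - v)
            if i < len(right) and right[i] == r - v:
                return True
        a[mid - 1:mid + 1] = [a[mid - 1] * a[mid]]   # otherwise that slot must be '*'
    return a[0] == r

print(reachable_mitm([1, 2, 3, 4], 14))    # True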
Let S be a set of n intervals over the natural numbers that might overlap, and let N be a list of n numbers.
I want to find the smallest subset of S (let's call it P) such that for each number
in our list N, there exists at least one interval in P that contains it. The intervals in P are allowed to overlap.
Trivial example:
S = {[1..4], [2..7], [3..5], [8..15], [9..13]}
N = [1, 4, 5]
// so P = {[1..4], [2..7]}
I think a dynamic programming algorithm might not always work, so if anybody knows of a solution to this problem (or a similar one that can be converted into it), that would be great. I am trying to find an O(n^2) solution.
Here is one greedy approach
P = {}
for each q in N: // O(n)
if q in P // O(n)
continue
for each i in S // O(n)
if q in i: // O(n)
P.add(i)
break
But that is O(n^4).. Any help with creating a greedy approach that is O(n^2) would be great!
Thanks!
Update: I've been hammering away at this problem and I think I have an O(n^2) solution!!
Let me know if you think I'm right!!!
N = MergeSort(N)
lower, upper = +infinity, -infinity   // nothing covered yet
P = empty list
for each q in N do
    if not (lower <= q <= upper)
        max_interval = [-infinity, -infinity]
        for each r in S do
            if q in r then
                if r.rightEndPoint > max_interval.rightEndPoint
                    max_interval = r
        P.append(max_interval)
        lower = max_interval.leftEndPoint
        upper = max_interval.rightEndPoint
        S.remove(max_interval)
I think this should work!! I'm trying to find a counter-example; but yeah!!
This problem is similar to the set cover problem, which is NP-complete (i.e., presumably has no solution faster than exponential). What makes it different is that intervals always cover adjacent elements (not an arbitrary subset of N), which opens the way for faster solutions.
http://en.wikipedia.org/wiki/Set_cover_problem
I think the solution proposed by Mike is good enough, but here is a fairly straightforward O(N^2) greedy algorithm. It starts like Mike's (and I believe Mike's solution can also be improved in a similar way):
Sort your N numbers and place them into an array ELEM; complexity O(N lg N).
Using binary search, for each interval S[i] identify the starting and ending indices of the elements in ELEM covered by S[i]. Place this pair of indices into an array COVER; the difference between the two indices tells you how many elements the interval covers, which we store in an array COVER_COUNT; complexity O(N lg N).
Introduce an index pointer p that shows up to which element of ELEM your N is already covered. Set p = 0, meaning that all elements before the 0-th (i.e., none) are initially covered; complexity O(1). Also introduce a boolean array IS_INCLUDED that records whether interval S[i] is already included in your coverage set; complexity O(N).
Then start from the 0-th element of ELEM and find the interval that contains ELEM[0] and has the greatest coverage COVER_COUNT[i]. Say it is the i-th interval. Mark it as included by setting IS_INCLUDED[i] to true. Set p to end[i] + 1, where end[i] is the ending index in the COVER[i] pair (all elements up to end[i] are now covered). Then, knowing p, update all entries of COVER_COUNT so that they reflect how many not-yet-covered elements each interval covers (this can easily be done in O(N) time). Perform the same step for ELEM[p] and continue until p >= ELEM.length. The overall complexity is O(N^2).
You finish in O(N^2), and IS_INCLUDED holds true for exactly the intervals of S included in the optimal cover set.
Let me know if this solution seems reasonable to you and if I calculated everything well.
P.S. Just wanted to add that the optimality of the solution found by the algorithm can be proved by induction and contradiction. By contradiction, it is easy to show that at least one optimal solution includes the longest interval among those covering element ELEM[0]. If so, by induction we can show that for each next element in the algorithm, we can keep following the strategy of selecting the interval that covers the most remaining elements among those that cover the leftmost not-yet-covered element.
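For reference, here is a compact Python sketch of this greedy idea in its simpler "farthest right endpoint" form (my own illustration; among the intervals covering the leftmost uncovered number, taking the one reaching farthest right is equivalent to taking the one covering the most remaining elements):

def min_interval_cover(S, N):
    """Greedy cover: for the leftmost uncovered number, take the containing interval
    that reaches farthest to the right. Intervals are (left, right), inclusive."""
    P = []
    covered_up_to = float('-inf')
    for q in sorted(N):
        if q <= covered_up_to:
            continue                          # already covered by the last chosen interval
        best = None
        for (lo, hi) in S:
            if lo <= q <= hi and (best is None or hi > best[1]):
                best = (lo, hi)
        if best is None:
            raise ValueError("no interval covers %r" % q)
        P.append(best)
        covered_up_to = best[1]
    return P

S = [(1, 4), (2, 7), (3, 5), (8, 15), (9, 13)]
N = [1, 4, 5]
print(min_interval_cover(S, N))               # [(1, 4), (2, 7)]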
I am not sure, but maybe something like this:
1) For each interval S[i], create a list Q[i] of the elements of N contained in it; this takes O(n^2).
2) Sort S by the length of Q[i]; O(n lg n).
3) Go through this array, removing the elements of Q[i] from N (O(n)) and from Q[i+1]...Q[n] (O(n^2)).
4) Repeat from step 2 while N is not empty.
It's not O(n^2), it's O(n^3), but if you can use a hash map, I think you can improve this.
I'm trying to solve the distance transform problem (using Manhattan distance). Basically, given a matrix of 0's and 1's, the program must assign to every position its distance to the nearest 1. For example, for this one
0000
0100
0000
0000
distance transform matrix is
2123
1012
2123
3234
Possible solutions from my head are:
Slowest ones (slowest because I have tried to implement them - they were lagging on very big matrices):
Brute-force - for every 1 that program reads, change distances accordingly from beginning till end.
Breadth-first search from the 0's - for every 0, the program looks for the nearest 1, searching inside out.
Same as 2, but starting from the 1's, marking every distance inside out.
Much faster (read from other people's code)
Breadth-first search from 1's
1. Assign all values in the distance matrix to -1 or very big value.
2. While reading matrix, put all positions of 1's into queue.
3. While queue is not empty
a. Dequeue position - let it be x
b. For each position around x (that has distance 1 from it)
if position is valid (does not exceed matrix dimensions) then
if distance is not initialized or is greater than (distance of x) + 1 then
I. distance = (distance of x) + 1
II. enqueue position into queue
I wanted to ask if there is faster solution to that problem. I tried to search algorithms for distance transform but most of them are dealing with Euclidean distances.
Thanks in advance.
The breadth first search would perform Θ(n*m) operations where n and m are the width and height of your matrix.
You need to output Θ(n*m) numbers, so you can't get any faster than that from a theoretical point of view.
I'm assuming you are not interested in going towards discussions involving cache and such optimizations.
Note that this solution works in more interesting cases. For example, imagine the same question, but there could be different "sources":
00000
01000
00000
00000
00010
Using BFS, you will get the following distance-to-closest-source in the same time complexity:
21234
10123
21223
32212
32101
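For illustration, here is a short Python sketch of the multi-source BFS (my own code, not from the answer; since all steps have cost 1, the first time a cell is reached is already its final distance):

from collections import deque

def distance_transform(grid):
    """Manhattan distance from each cell to the nearest 1, via multi-source BFS."""
    rows, cols = len(grid), len(grid[0])
    dist = [[-1] * cols for _ in range(rows)]
    q = deque()
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] == 1:
                dist[i][j] = 0
                q.append((i, j))              # every 1 is a BFS source
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols and dist[ni][nj] == -1:
                dist[ni][nj] = dist[i][j] + 1
                q.append((ni, nj))
    return dist

grid = [[0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
for row in distance_transform(grid):
    print(row)                                # 2123 / 1012 / 2123 / 3234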
However, with a single source, there is another solution that might have a slightly better performance in practice (even though the complexity is still the same).
Before, let's observe the following property.
Property: If source is at (a, b), then a point (x, y) has the following manhattan distance:
d(x, y) = abs(x - a) + abs(y - b)
This should be quite easy to prove. So another algorithm would be:
for r in rows
for c in cols
d(r, c) = abs(r - a) + abs(c - b)
which is very short and easy.
Unless you write and test it, there is no easy way of comparing the two algorithms. Assuming an efficient bounded queue implementation (with an array), you have the following major operations per cell:
BFS: queue insertion/deletion, visit of each node 5 times (four times by neighbors, and one time out of the queue)
Direct formula: two subtraction and two ifs
It would really depend on the compiler and its optimizations as well as the specific CPU and memory architecture to say which would perform better.
That said, I'd advise for going with whichever seems simpler to you. Note however that with multiple sources, in the second solution you would need multiple passes on the array (or multiple distance calculations in one pass) and that would definitely have a worse performance than BFS for a large enough number of sources.
You don't need a queue or anything like that at all. Notice that if (i,j) is at distance d from (k,l), one way to realise that distance is to go up or down |i-k| times and then left or right |j-l| times.
So, initialise your matrix with big numbers and stick a zero everywhere you have a 1 in your input. Now do something like this:
for (i = 0; i < sx; i++) {
    for (j = 0; j < sy; j++) {
        /* best distance along paths that only come from above or from the left,
           i.e. shortest paths that go only down or right from a source */
        if (i > 0) dist[i][j] = min(dist[i][j], dist[i-1][j] + 1);
        if (j > 0) dist[i][j] = min(dist[i][j], dist[i][j-1] + 1);
    }
}
At this point, you've found all of the shortest paths that involve only going down or right. If you do a similar thing for going up and left, dist[i][j] will give you the distance from (i, j) to the nearest 1 in your input matrix.
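A small Python sketch of the complete two-pass idea described above (my own illustration):

def manhattan_dt(grid):
    """Manhattan distance transform with two sweeps: down/right, then up/left."""
    rows, cols = len(grid), len(grid[0])
    INF = rows + cols                         # larger than any possible distance
    d = [[0 if grid[i][j] == 1 else INF for j in range(cols)] for i in range(rows)]
    for i in range(rows):                     # pass 1: sources up and/or to the left
        for j in range(cols):
            if i > 0: d[i][j] = min(d[i][j], d[i - 1][j] + 1)
            if j > 0: d[i][j] = min(d[i][j], d[i][j - 1] + 1)
    for i in reversed(range(rows)):           # pass 2: sources down and/or to the right
        for j in reversed(range(cols)):
            if i < rows - 1: d[i][j] = min(d[i][j], d[i + 1][j] + 1)
            if j < cols - 1: d[i][j] = min(d[i][j], d[i][j + 1] + 1)
    return d

for row in manhattan_dt([[0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]):
    print(row)                                # 2123 / 1012 / 2123 / 3234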
I am going to implement a Farey fraction approximation for converting limited-precision user input into possibly-repeating rationals.
http://mathworld.wolfram.com/FareySequence.html
I can easily locate the closest Farey fraction in a sequence, and I can find Fn by recursively searching for mediant fractions by building the Stern-Brocot tree.
http://mathworld.wolfram.com/Stern-BrocotTree.html
However, the method I've come up with for finding the fractions in the sequence Fn seems very inefficient:
(pseudo)
For int i = 0 to fractions.count -2
{
if fractions[i].denominator + fractions[i+1].denominator <= n
{
insert new fraction(
numerator = fractions[i].numerator + fractions[i+1].numerator
,denominator = fractions[i].denominator + fractions[i+1].denominator)
//note that fraction will reduce itself
addedAnElement = true
}
}
if addedAnElement
repeat
I will almost always be defining the sequence Fn where n = 10^m with m > 1.
So perhaps it might be best to build the sequence one time and cache it... but it still seems like there should be a better way to derive it.
EDIT:
This paper has a promising algorithm:
http://www.math.harvard.edu/~corina/publications/farey.pdf
I will try to implement.
The trouble is that their "most efficient" algorithm requires knowing the prior two elements. I know element one of any sequence is 1/n but finding the second element seems a challenge...
EDIT2:
I'm not sure how I overlooked this:
Given F0 = 1/n,
if n > 2 then
F1 = 1/(n-1).
Therefore, for all n > 2, the first two fractions will always be
1/n, 1/(n-1), and I can implement the solution from Patrascu.
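For what it's worth, here is a Python sketch of the standard next-term recurrence for consecutive Farey fractions (my own illustration; it starts from 0/1 and 1/n rather than 1/n and 1/(n-1), but it is the same recurrence that needs the prior two terms):

def farey(n):
    """Yield the fractions of F_n in increasing order via the next-term recurrence."""
    a, b = 0, 1            # current term a/b, starting at 0/1
    c, d = 1, n            # next term c/d = 1/n
    yield (a, b)
    while c <= n:
        k = (n + b) // d   # how far the mediant pattern can be pushed while staying in F_n
        a, b, c, d = c, d, k * c - a, k * d - b
        yield (a, b)

print(list(farey(5)))
# [(0, 1), (1, 5), (1, 4), (1, 3), (2, 5), (1, 2), (3, 5), (2, 3), (3, 4), (4, 5), (1, 1)]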
So now, the answer to this question should show, using benchmarks, whether or not this solution is optimal.
Why do you need the Farey series at all? Using continued fractions would give you the same approximation online without precalculating the series.
Neighboring fractions in Farey sequences are described in Sec. 3 of Neighboring Fractions in Farey Subsequences, http://arxiv.org/abs/0801.1981 .
The famous Fisher-Yates shuffle algorithm can be used to randomly permute an array A of length N:
For k = 1 to N
Pick a random integer j from k to N
Swap A[k] and A[j]
A common mistake that I've been told over and over again not to make is this:
For k = 1 to N
Pick a random integer j from 1 to N
Swap A[k] and A[j]
That is, instead of picking a random integer from k to N, you pick a random integer from 1 to N.
What happens if you make this mistake? I know that the resulting permutation isn't uniformly distributed, but I don't know what guarantees there are on what the resulting distribution will be. In particular, does anyone have an expression for the probability distributions over the final positions of the elements?
An Empirical Approach.
Let's implement the erroneous algorithm in Mathematica:
p = 10; (* Range *)
s = {}
For[l = 1, l <= 30000, l++, (*Iterations*)
a = Range[p];
For[k = 1, k <= p, k++,
i = RandomInteger[{1, p}];
temp = a[[k]];
a[[k]] = a[[i]];
a[[i]] = temp
];
AppendTo[s, a];
]
Now get the number of times each integer is in each position:
r = SortBy[#, #[[1]] &] & /@ Tally /@ Transpose[s]
Let's take three positions in the resulting arrays and plot the frequency distribution for each integer in that position:
The frequency distribution of each integer was plotted for position 1, position 5 (middle), and position 10 (last), together with the distribution for all positions plotted together and better statistics over 8 positions (plots omitted).
Some observations:
For all positions the probability of "1" is the same (1/n).
The probability matrix is symmetrical with respect to the big anti-diagonal.
So, the probability for any number in the last position is also uniform (1/n).
You may visualize those properties by noting that all lines start from the same point (first property) and by looking at the last horizontal line (third property).
The second property can be seen in the following matrix representation example, where the rows are the positions, the columns are the occupant number, and the color represents the experimental probability (shown for a 100x100 matrix).
Edit
Just for fun, I calculated the exact formula for the second diagonal element (the first is 1/n). The rest can be done, but it's a lot of work.
h[n_] := (n-1)/n^2 + (n-1)^(n-2) n^(-n)
Values verified from n=3 to 6 ( {8/27, 57/256, 564/3125, 7105/46656} )
Edit
Working out a little the general explicit calculation in @wnoise's answer, we can get a little more info.
Replacing 1/n by p[n], so that the calculations are held unevaluated, we get, for example, the following for the first part of the matrix with n = 7 (image omitted), which, after comparing with results for other values of n, lets us identify some known integer sequences in the matrix:
{{ 1/n, 1/n , ...},
{... .., A007318, ....},
{... .., ... ..., ..},
... ....,
{A129687, ... ... ... ... ... ... ..},
{A131084, A028326 ... ... ... ... ..},
{A028326, A131084 , A129687 ... ....}}
You may find those sequences (in some cases with different signs) in the wonderful http://oeis.org/
Solving the general problem is more difficult, but I hope this is a start
The "common mistake" you mention is shuffling by random transpositions. This problem was studied in full detail by Diaconis and Shahshahani in Generating a random permutation with random transpositions (1981). They do a complete analysis of stopping times and convergence to uniformity. If you cannot get a link to the paper, then please send me an e-mail and I can forward you a copy. It's actually a fun read (as are most of Persi Diaconis's papers).
If the array has repeated entries, then the problem is slightly different. As a shameless plug, this more general problem is addressed by myself, Diaconis and Soundararajan in Appendix B of A Rule of Thumb for Riffle Shuffling (2011).
Let's say
a = 1/N
b = 1-a
Bi(k) is the probability (row) vector after i swaps for the k-th element, i.e. the answer to the question "where is k after i swaps?". For example B0(3) = (0 0 1 0 ... 0) and B1(3) = (a 0 b 0 ... 0). What you want is BN(k) for every k.
Ki is an NxN matrix with 1s in the i-th column and i-th row and zeroes everywhere else, and Ii is the identity matrix but with the element at x=y=i zeroed. The transition matrix for the i-th swap is then
Ai = a*Ki + b*Ii
so that
BN(k) = B0(k) * A1 * A2 * ... * AN
But because the vectors B0(k), k = 1..N, stacked together form the identity matrix, the probability that any given element i will at the end be at position j is given by the matrix element (i,j) of the product:
A1 * A2 * ... * AN
For example, for N=4:
As a diagram for N = 500 (color levels are 100*probability):
The pattern is the same for all N>2:
The most probable ending position for k-th element is k-1.
The least probable ending position is k for k < N*ln(2), position 1 otherwise
I knew I had seen this question before...
" why does this simple shuffle algorithm produce biased results? what is a simple reason? " has a lot of good stuff in the answers, especially a link to a blog by Jeff Atwood on Coding Horror.
As you may have already guessed, based on the answer by #belisarius, the exact distribution is highly dependent on the number of elements to be shuffled. Here's Atwood's plot for a 6-element deck:
What a lovely question! I wish I had a full answer.
Fisher-Yates is nice to analyze because once it decides on the first element, it leaves it alone. The biased one can repeatedly swap an element in and out of any place.
We can analyze this the same way we would a Markov chain, by describing the actions as stochastic transition matrices acting linearly on probability distributions. Most elements get left alone, so the diagonal is usually (n-1)/n. On pass k, when they don't get left alone, they get swapped with element k (or with a random element if they are element k). This puts 1/n in row or column k; the element in both row and column k is also 1/n. It's easy enough to multiply these matrices together for k going from 1 to n.
We do know that the element in last place will be equally likely to have originally been anywhere, because the last pass swaps the last place equally likely with any other. Similarly, the first element will be equally likely to be placed anywhere. This symmetry is because the transpose reverses the order of matrix multiplication. In fact, the matrix is symmetric in the sense that row i is the same as column (n+1 - i). Beyond that, the numbers don't show much apparent pattern. These exact solutions do show agreement with the simulations run by belisarius: in slot i, the probability of getting j decreases as j rises to i, reaching its lowest value at i-1, then jumping up to its highest value at i, and decreasing until j reaches n.
In Mathematica I generated each step with
step[k_, n_] := Normal[SparseArray[{{k, i_} -> 1/n,
{j_, k} -> 1/n, {i_, i_} -> (n - 1)/n} , {n, n}]]
(I haven't found it documented anywhere, but the first matching rule is used.)
The final transition matrix can be calculated with:
Fold[Dot, IdentityMatrix[n], Table[step[m, n], {m, s}]]
ListDensityPlot is a useful visualization tool.
Edit (by belisarius)
Just a confirmation. The following code gives the same matrix as in @Eelvex's answer:
step[k_, n_] := Normal[SparseArray[{{k, i_} -> (1/n),
{j_, k} -> (1/n), {i_, i_} -> ((n - 1)/n)}, {n, n}]];
r[n_, s_] := Fold[Dot, IdentityMatrix[n], Table[step[m, n], {m, s}]];
Last@Table[r[4, i], {i, 1, 4}] // MatrixForm
Wikipedia's page on the Fisher-Yates shuffle has a description and example of exactly what will happen in that case.
You can compute the distribution using stochastic matrices. Let the matrix A(i,j) describe the probability of the card originally at position i ending up in position j. Then the kth swap has a matrix Ak given by Ak(i,j) = 1/N if i == k or j == k, (the card in position k can end up anywhere and any card can end up at position k with equal probability), Ak(i,i) = (N - 1)/N for all i != k (every other card will stay in the same place with probability (N-1)/N) and all other elements zero.
The result of the complete shuffle is then given by the product of the matrices AN ... A1.
I expect you're looking for an algebraic description of the probabilities; you can get one by expanding out the above matrix product, but I imagine it will be fairly complex!
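For anyone who wants to compute this product numerically, here is a small Python/NumPy sketch (my own illustration; the matrix A_k follows the definition above):

import numpy as np

def swap_matrix(n, k):
    """Transition matrix A_k for the k-th pass of the broken shuffle (0-based k)."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = (n - 1) / n   # a card elsewhere stays put...
        A[i, k] = 1 / n         # ...or gets swapped into position k
    A[k, :] = 1 / n             # the card at position k can land anywhere
    return A

def broken_shuffle_distribution(n):
    """Entry (i, j) = probability that the card starting at position i ends at position j."""
    M = np.eye(n)
    for k in range(n):
        M = M @ swap_matrix(n, k)
    return M

print(broken_shuffle_distribution(4))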
UPDATE: I just spotted wnoise's equivalent answer above! oops...
I've looked into this further, and it turns out that this distribution has been studied at length. The reason it's of interest is because this "broken" algorithm is (or was) used in the RSA chip system.
In Shuffling by semi-random transpositions, Elchanan Mossel, Yuval Peres, and Alistair Sinclair study this and a more general class of shuffles. The upshot of that paper appears to be that it takes log(n) broken shuffles to achieve near random distribution.
In The bias of three pseudorandom shuffles (Aequationes Mathematicae, 22, 1981, 268-292), Ethan Bolker and David Robbins analyze this shuffle and determine that the total variation distance to uniformity after a single pass is 1, indicating that it is not very random at all. They give asymptotic analyses as well.
Finally, Laurent Saloff-Coste and Jessica Zuniga found a nice upper bound in their study of inhomogeneous Markov chains.
This question is begging for an interactive visual matrix diagram analysis of the broken shuffle mentioned. Such a tool is on the page Will It Shuffle? - Why random comparators are bad by Mike Bostock.
Bostock has put together an excellent tool that analyzes random comparators. In the dropdown on that page, choose naïve swap (random ↦ random) to see the broken algorithm and the pattern it produces.
His page is informative as it allows one to see the immediate effects a change in logic has on the shuffled data. For example:
This matrix diagram using a non-uniform and very-biased shuffle is produced using a naïve swap (we pick from "1 to N") with code like this:
function shuffle(array) {
  var n = array.length, i = -1, j, t;
  while (++i < n) {
    j = Math.floor(Math.random() * n);   // picks from the whole array every time (biased)
    t = array[j];
    array[j] = array[i];
    array[i] = t;
  }
}
But if we implement a non-biased shuffle, where we pick from "k to N" we should see a diagram like this:
where the distribution is uniform, and is produced from code such as:
function FisherYatesDurstenfeldKnuthshuffle( array ) {
var pickIndex, arrayPosition = array.length;
while( --arrayPosition ) {
pickIndex = Math.floor( Math.random() * ( arrayPosition + 1 ) );
array[ pickIndex ] = [ array[ arrayPosition ], array[ arrayPosition ] = array[ pickIndex ] ][ 0 ];
}
}
The excellent answers given so far concentrate on the distribution, but you have also asked "What happens if you make this mistake?", which I haven't seen answered yet, so I'll give an explanation of this:
The Knuth-Fisher-Yates shuffle algorithm picks 1 out of n elements, then 1 out of the n-1 remaining elements, and so forth.
You can implement it with two arrays a1 and a2, where you remove one element from a1 and insert it into a2, but the algorithm does it in place (which means that it needs only one array), as is explained very well here (Google: "Shuffling Algorithms Fisher-Yates DataGenetics").
If you don't remove the elements, they can be randomly chosen again, which produces the biased randomness. This is exactly what the 2nd example you are describing does. The first example, the Knuth-Fisher-Yates algorithm, uses a cursor variable running from k to N, which remembers which elements have already been taken, hence avoiding picking elements more than once.
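A tiny Python sketch of the two-array formulation described above (my own illustration, not from the linked article):

import random

def fisher_yates_two_arrays(a1):
    """Remove a uniformly random element from a1 and append it to a2, until a1 is empty."""
    a1 = list(a1)                           # work on a copy
    a2 = []
    while a1:
        k = random.randrange(len(a1))       # pick among the *remaining* elements only
        a2.append(a1.pop(k))                # remove it so it cannot be picked again
    return a2

print(fisher_yates_two_arrays([1, 2, 3, 4, 5]))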