This is the sample code I've found for calculating the determinant of an n×n matrix, and it works fine and all, but I'm having trouble understanding what's happening in the conversion to triangular form section. Can someone explain what's happening in the "Convert to upper triangular form" part?
I don't have trouble calculating the determinant on my own or doing upper triangular form conversions by hand, but I just don't get how it all translates into this program.
What is happening with the integers (i, j, k, l)? Specifically, what are k and l doing? What is happening inside the IF construct? For a matrix A, I know that A(i,j) refers to the element at row i and column j, and that's all I've ever needed for any matrix programs I've worked with in the past.
========================================================================
!Function to find the determinant of a square matrix
!Description: The function is based on two key points:
!1] A determinant is unaltered when row operations are performed: hence,
!   using this principle, row operations (column operations would work
!   as well) are used to convert the matrix into upper triangular form
!2] The determinant of a triangular matrix is obtained by finding the
!   product of the diagonal elements
REAL FUNCTION FindDet(matrix, n)
    IMPLICIT NONE
    INTEGER, INTENT(IN) :: n
    REAL, DIMENSION(n,n), INTENT(INOUT) :: matrix   !Overwritten in place
    REAL :: m, temp
    INTEGER :: i, j, k, l
    LOGICAL :: DetExists

    DetExists = .TRUE.   !Initialized here rather than in the declaration,
                         !which would carry an implicit SAVE attribute
    l = 1
    !Convert to upper triangular form
    DO k = 1, n-1
        IF (matrix(k,k) == 0) THEN
            DetExists = .FALSE.
            !Search column k below the diagonal for a usable pivot row
            DO i = k+1, n
                IF (matrix(i,k) /= 0) THEN
                    !Swap rows i and k
                    DO j = 1, n
                        temp = matrix(i,j)
                        matrix(i,j) = matrix(k,j)
                        matrix(k,j) = temp
                    END DO
                    DetExists = .TRUE.
                    l = -l   !A row swap reverses the sign of the determinant
                    EXIT
                END IF
            END DO
            IF (DetExists .EQV. .FALSE.) THEN
                FindDet = 0
                RETURN
            END IF
        END IF
        !Eliminate the entries below the diagonal in column k
        DO j = k+1, n
            m = matrix(j,k)/matrix(k,k)
            DO i = k+1, n
                matrix(j,i) = matrix(j,i) - m*matrix(k,i)
            END DO
        END DO
    END DO
    !Calculate determinant by finding product of diagonal elements
    FindDet = l
    DO i = 1, n
        FindDet = FindDet * matrix(i,i)
    END DO
END FUNCTION FindDet
In this implementation, you can interpret the indices i, j, k, l as follows:
i and j: used interchangeably (the code is not consistent in this regard) for the row and column coordinates of an element of the matrix.
k: iterates over the "dimension" of the matrix rather than naming a coordinate; in a different abstraction, it walks down the diagonal.
l: is +1 or -1, flipping its sign whenever the algorithm performs a row swap. It accounts for the fact that switching any two rows of a matrix reverses the sign of its determinant.
So, the interpretation of the code is:
At each iteration over the dimension of the matrix:
First, check whether this diagonal element is zero. If it is zero:
ALARM: maybe the matrix is degenerate.
Let's find out. Iterate downwards over the rest of this column, trying to find a non-zero element.
If you find a row with a non-zero element in this column, it was a false alarm. Perform the row swap and invert the sign of the determinant. Go on.
If they were all zeroes, then the matrix is degenerate. There is nothing else to do; the determinant is zero.
Going on. For each row below this diagonal element:
Subtract the appropriate multiple of the pivot row, building up the upper triangular matrix.
Finally, calculate the determinant by multiplying all the elements on the diagonal, taking the sign changes into account.
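If it helps to see the same bookkeeping outside Fortran, here is a short Python transcription (my sketch, not part of the original program): k walks the diagonal, a zero pivot triggers the search-and-swap in column k with a sign flip in l, and the diagonal product at the end gives the determinant.
def find_det(matrix):
    a = [row[:] for row in matrix]       # work on a copy
    n = len(a)
    l = 1                                # sign: flips on every row swap
    for k in range(n - 1):               # walk down the diagonal
        if a[k][k] == 0:
            for i in range(k + 1, n):    # search column k below the pivot
                if a[i][k] != 0:
                    a[k], a[i] = a[i], a[k]   # row swap...
                    l = -l                    # ...reverses the determinant's sign
                    break
            else:
                return 0                 # whole column is zero: determinant is 0
        for j in range(k + 1, n):        # eliminate entries below the pivot
            m = a[j][k] / a[k][k]
            for i in range(k, n):
                a[j][i] -= m * a[k][i]
    det = l
    for i in range(n):                   # product of the diagonal elements
        det *= a[i][i]
    return det

print(find_det([[2, 1], [4, 3]]))        # 2.0 (= 2*3 - 1*4)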
The famous Fisher-Yates shuffle algorithm can be used to randomly permute an array A of length N:
For k = 1 to N
Pick a random integer j from k to N
Swap A[k] and A[j]
A common mistake that I've been told over and over again not to make is this:
For k = 1 to N
Pick a random integer j from 1 to N
Swap A[k] and A[j]
That is, instead of picking a random integer from k to N, you pick a random integer from 1 to N.
What happens if you make this mistake? I know that the resulting permutation isn't uniformly distributed, but I don't know what guarantees there are on what the resulting distribution will be. In particular, does anyone have an expression for the probability distributions over the final positions of the elements?
An Empirical Approach.
Let's implement the erroneous algorithm in Mathematica:
p = 10; (* Range *)
s = {};
For[l = 1, l <= 30000, l++, (* Iterations *)
    a = Range[p];
    For[k = 1, k <= p, k++,
        i = RandomInteger[{1, p}];
        temp = a[[k]];
        a[[k]] = a[[i]];
        a[[i]] = temp
    ];
    AppendTo[s, a];
]
Now get the number of times each integer is in each position:
r = SortBy[#, #[[1]] &] & /@ Tally /@ Transpose[s]
Let's take three positions in the resulting arrays and plot the frequency distribution for each integer in that position: position 1, position 5 (middle), and position 10 (last); then the distribution for all positions plotted together, and a higher-statistics run over 8 positions (plots omitted).
Some observations:
For all positions the probability of "1" is the same (1/n).
The probability matrix is symmetric with respect to the big anti-diagonal.
So the probability for any number in the last position is also uniform (1/n).
You may visualize those properties in the plots: all lines start from the same point (first property), and the last horizontal line is flat (third property).
The second property can be seen from the following matrix representation example, where the rows are the positions, the columns are the occupant number, and the color represents the experimental probability:
For a 100x100 matrix (image omitted):
Edit
Just for fun, I calculated the exact formula for the second diagonal element (the first is 1/n). The rest can be done, but it's a lot of work.
h[n_] := (n-1)/n^2 + (n-1)^(n-2) n^(-n)
Values verified from n=3 to 6 ( {8/27, 57/256, 564/3125, 7105/46656} )
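Those values are easy to re-check with exact rational arithmetic; a minimal Python verification (my sketch):
from fractions import Fraction

def h(n):
    # h[n] = (n-1)/n^2 + (n-1)^(n-2) * n^(-n), in exact arithmetic
    return Fraction(n - 1, n**2) + Fraction((n - 1)**(n - 2), n**n)

print([h(n) for n in range(3, 7)])
# [Fraction(8, 27), Fraction(57, 256), Fraction(564, 3125), Fraction(7105, 46656)]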
Edit
Working out the general explicit calculation in @wnoise's answer a little further, we can get more info.
Replacing 1/n by p[n], so the calculations are held unevaluated, we get for example the first part of the matrix for n=7 (image omitted):
Which, after comparing with results for other values of n, let us identify some known integer sequences in the matrix:
{{ 1/n, 1/n , ...},
{... .., A007318, ....},
{... .., ... ..., ..},
... ....,
{A129687, ... ... ... ... ... ... ..},
{A131084, A028326 ... ... ... ... ..},
{A028326, A131084 , A129687 ... ....}}
You may find those sequences (in some cases with different signs) in the wonderful http://oeis.org/
Solving the general problem is more difficult, but I hope this is a start.
The "common mistake" you mention is shuffling by random transpositions. This problem was studied in full detail by Diaconis and Shahshahani in Generating a random permutation with random transpositions (1981). They do a complete analysis of stopping times and convergence to uniformity. If you cannot get a link to the paper, then please send me an e-mail and I can forward you a copy. It's actually a fun read (as are most of Persi Diaconis's papers).
If the array has repeated entries, then the problem is slightly different. As a shameless plug, this more general problem is addressed by myself, Diaconis and Soundararajan in Appendix B of A Rule of Thumb for Riffle Shuffling (2011).
Let's say
a = 1/N
b = 1 - a
B_i(k) is the probability vector after i swaps for the kth element, i.e. the answer to the question "where is k after i swaps?". For example B_0(3) = (0 0 1 0 ... 0) and B_1(3) = (a 0 b 0 ... 0). What you want is B_N(k) for every k.
K_i is an NxN matrix with 1s in the i-th column and i-th row, zeroes everywhere else, e.g. for i=2 and N=4:
0 1 0 0
1 1 1 1
0 1 0 0
0 1 0 0
I_i is the identity matrix but with the element x=y=i zeroed, e.g. for i=2:
1 0 0 0
0 0 0 0
0 0 1 0
0 0 0 1
A_i is then
A_i = a*K_i + b*I_i
Then,
B_N(k) = B_0(k) * A_1 * A_2 ... A_N
But because the vectors B_0(k), k = 1..N, stacked together form the identity matrix, the probability that any given element i will at the end be at position j is given by the matrix element (i,j) of the product matrix
P = A_1 * A_2 ... A_N
For example, for N=4 (matrix image omitted):
As a diagram for N = 500 (color levels are 100*probability; image omitted):
The pattern is the same for all N > 2:
The most probable ending position for the k-th element is k-1.
The least probable ending position is k for k < N*ln(2), and position 1 otherwise.
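To make the construction concrete, here is a small numpy sketch (my reconstruction from the definitions above, not code from this answer) that builds A_i = a*K_i + b*I_i and multiplies them into P:
import numpy as np

def biased_shuffle_matrix(N):
    a, b = 1.0 / N, 1.0 - 1.0 / N
    P = np.eye(N)
    for i in range(N):
        K = np.zeros((N, N)); K[i, :] = 1; K[:, i] = 1   # 1s in row i and column i
        I = np.eye(N); I[i, i] = 0                       # identity with (i,i) zeroed
        P = P @ (a * K + b * I)                          # A_i = a*K_i + b*I_i
    return P

P = biased_shuffle_matrix(4)
print(P)              # entry (i, j): probability that element i ends at position j
print(P.sum(axis=1))  # each row sums to 1 (stochastic)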
I knew I had seen this question before...
" why does this simple shuffle algorithm produce biased results? what is a simple reason? " has a lot of good stuff in the answers, especially a link to a blog by Jeff Atwood on Coding Horror.
As you may have already guessed, based on the answer by @belisarius, the exact distribution is highly dependent on the number of elements to be shuffled. Here's Atwood's plot for a 6-element deck (plot omitted):
What a lovely question! I wish I had a full answer.
Fisher-Yates is nice to analyze because once it decides on the first element, it leaves it alone. The biased one can repeatedly swap an element in and out of any place.
We can analyze this the same way we would a Markov chain, by describing the actions as stochastic transition matrices acting linearly on probability distributions. Most elements get left alone, so the diagonal is usually (n-1)/n. On pass k, when elements don't get left alone, they get swapped with element k (or with a random element, if they are element k). This gives 1/n in either row or column k; the element in both row and column k is also 1/n (consistent with the code below). It's easy enough to multiply these matrices together for k going from 1 to n.
We do know that the element in last place will be equally likely to have originally been anywhere, because the last pass swaps the last place equally likely with any other. Similarly, the first element will be equally likely to be placed anywhere. This symmetry arises because the transpose reverses the order of matrix multiplication. In fact, the matrix is symmetric in the sense that row i is the same as column (n+1-i). Beyond that, the numbers don't show much apparent pattern. These exact solutions do show agreement with the simulations run by belisarius: in slot i, the probability of getting j decreases as j rises towards i, reaching its lowest value at i-1, then jumps up to its highest value at i, and decreases until j reaches n.
In Mathematica I generated each step with
step[k_, n_] := Normal[SparseArray[{{k, i_} -> 1/n,
{j_, k} -> 1/n, {i_, i_} -> (n - 1)/n} , {n, n}]]
(I haven't found it documented anywhere, but the first matching rule is used.)
The final transition matrix can be calculated with:
Fold[Dot, IdentityMatrix[n], Table[step[m, n], {m, s}]]
ListDensityPlot is a useful visualization tool.
Edit (by belisarius)
Just a confirmation: the following code gives the same matrix as in @Eelvex's answer:
step[k_, n_] := Normal[SparseArray[{{k, i_} -> (1/n),
{j_, k} -> (1/n), {i_, i_} -> ((n - 1)/n)}, {n, n}]];
r[n_, s_] := Fold[Dot, IdentityMatrix[n], Table[step[m, n], {m, s}]];
Last@Table[r[4, i], {i, 1, 4}] // MatrixForm
Wikipedia's page on the Fisher-Yates shuffle has a description and example of exactly what will happen in that case.
You can compute the distribution using stochastic matrices. Let the matrix A(i,j) describe the probability of the card originally at position i ending up in position j. Then the kth swap has a matrix A_k given by A_k(i,j) = 1/N if i == k or j == k (the card in position k can end up anywhere, and any card can end up at position k with equal probability), A_k(i,i) = (N-1)/N for all i != k (every other card stays in the same place with probability (N-1)/N), and all other elements zero.
The result of the complete shuffle is then given by the product of the matrices A_N ... A_1.
I expect you're looking for an algebraic description of the probabilities; you can get one by expanding out the above matrix product, but I imagine it will be fairly complex!
UPDATE: I just spotted wnoise's equivalent answer above! oops...
I've looked into this further, and it turns out that this distribution has been studied at length. The reason it's of interest is because this "broken" algorithm is (or was) used in the RSA chip system.
In Shuffling by semi-random transpositions, Elchanan Mossel, Yuval Peres, and Alistair Sinclair study this and a more general class of shuffles. The upshot of that paper appears to be that it takes log(n) broken shuffles to achieve near random distribution.
In The bias of three pseudorandom shuffles (Aequationes Mathematicae, 22, 1981, 268-292), Ethan Bolker and David Robbins analyze this shuffle and determine that the total variation distance to uniformity after a single pass is 1, indicating that it is not very random at all. They give asymptotic analyses as well.
Finally, Laurent Saloff-Coste and Jessica Zuniga found a nice upper bound in their study of inhomogeneous Markov chains.
This question is begging for an interactive visual matrix diagram analysis of the broken shuffle mentioned. Such a tool is on the page Will It Shuffle? - Why random comparators are bad by Mike Bostock.
Bostock has put together an excellent tool that analyzes random comparators. In the dropdown on that page, choose naïve swap (random ↦ random) to see the broken algorithm and the pattern it produces.
His page is informative as it allows one to see the immediate effects a change in logic has on the shuffled data. For example:
The matrix diagram of a non-uniform, very biased shuffle (image omitted) is produced using a naïve swap (we pick from "1 to N") with code like this:
function shuffle(array) {
    var n = array.length, i = -1, j, t;   // declare t, too, to avoid an implicit global
    while (++i < n) {
        j = Math.floor(Math.random() * n);
        t = array[j];
        array[j] = array[i];
        array[i] = t;
    }
}
But if we implement an unbiased shuffle, where we pick from "k to N", we should see a diagram (image omitted) where the distribution is uniform, produced from code such as:
function FisherYatesDurstenfeldKnuthshuffle( array ) {
    var pickIndex, arrayPosition = array.length;
    while( --arrayPosition ) {
        pickIndex = Math.floor( Math.random() * ( arrayPosition + 1 ) );
        array[ pickIndex ] = [ array[ arrayPosition ], array[ arrayPosition ] = array[ pickIndex ] ][ 0 ];
    }
}
The excellent answers given so far concentrate on the distribution, but you also asked "What happens if you make this mistake?", which I haven't seen answered yet, so here is an explanation:
The Knuth-Fisher-Yates shuffle algorithm picks 1 out of n elements, then 1 out of n-1 remaining elements and so forth.
You can implement it with two arrays a1 and a2, where you remove one element from a1 and insert it into a2; but the algorithm does it in place (meaning that it needs only one array), as is explained very well here (Google: "Shuffling Algorithms Fisher-Yates DataGenetics").
If you don't remove the elements, they can be randomly chosen again, which produces the biased randomness. This is exactly what the second example you describe does. The first example, the Knuth-Fisher-Yates algorithm, picks only from the range k to N, where the cursor k remembers which elements have already been taken, thus avoiding picking elements more than once.
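A tiny Python sketch of that two-array view (my illustration, not code from the linked article):
import random

def shuffle_two_arrays(a1):
    a1 = list(a1)   # source array: elements are removed as they are taken
    a2 = []         # destination array: receives one element per step
    while a1:
        # a taken element is gone from a1, so it can never be picked again
        a2.append(a1.pop(random.randrange(len(a1))))
    return a2

print(shuffle_two_arrays([1, 2, 3, 4, 5]))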
How would I go about making an LCG (a type of pseudo-random number generator) travel in both directions?
I know that travelling forward is (a*x+c)%m, but how would I be able to reverse it?
I am using this so I can store the seed at the position of the player in a map and generate things around it by propagating backward and forward in the LCG (like some sort of randomized number line).
All LCGs cycle. In an LCG which achieves maximal cycle length there is a unique predecessor and a unique successor for each value x (which won't necessarily be true for LCGs that don't achieve maximal cycle length, or for other algorithms with subcycle behaviors such as von Neumann's middle-square method).
Suppose our LCG has cycle length L. Since the behavior is cyclic, that means that after L iterations we are back to the starting value. Finding the predecessor value by taking one step backwards is mathematically equivalent to taking (L-1) steps forward.
The big question is whether that can be converted into a single step. If you're using a Prime Modulus Multiplicative LCG (where the additive constant is zero), it turns out to be pretty easy to do. If x_{i+1} = a * x_i % m, then x_{i+n} = a^n * x_i % m. As a concrete example, consider the PMMLCG with a = 16807 and m = 2^31 - 1. This has a maximal cycle length of m - 1 (it can never yield 0, for obvious reasons), so our goal is to iterate m - 2 times. We can precalculate a^(m-2) % m = 1407677000 using readily available exponentiation/mod libraries. Consequently, a forward step is found as x_{i+1} = 16807 * x_i % (2^31 - 1), while a backwards step is found as x_{i-1} = 1407677000 * x_i % (2^31 - 1).
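For instance, in Python the backward multiplier can be precalculated with built-in modular exponentiation (a quick check of the numbers above, my sketch):
a, m = 16807, 2**31 - 1
a_back = pow(a, m - 2, m)              # a^(m-2) % m
print(a_back)                          # 1407677000, per the text above
x = 42
assert a_back * (a * x % m) % m == x   # one step forward, then one step back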
ADDITIONAL
The same concept can be extended to generic full-cycle LCGs by casting the transition in matrix form and doing fast matrix exponentiation to come up with the equivalent one-stage transform. The matrix formulation for x_{i+1} = (a * x_i + c) % m is X_{i+1} = T · X_i % m, where T is the matrix [[a c], [0 1]] and X is the column vector (x, 1) transposed. Multiple iterations of the LCG can be quickly calculated by raising T to any desired power through fast exponentiation techniques using squaring and halving the power. After noticing that powers of the matrix T never alter the second row, I was able to focus on just the first-row calculations and produced the following implementation in Ruby:
# Composes the affine transform x -> ary[0]*x + ary[1] with itself
# `power` times (mod `mod`), by repeated squaring.
def power_mod(ary, mod, power)
  return ary.map { |x| x % mod } if power < 2
  # Squaring the transform: a*(a*x + c) + c = a^2 * x + (a + 1)*c
  square = [ary[0] * ary[0] % mod, (ary[0] + 1) * ary[1] % mod]
  square = power_mod(square, mod, power / 2)
  return square if power.even?
  # Odd power: compose the even part with one more application
  return [square[0] * ary[0] % mod, (square[0] * ary[1] + square[1]) % mod]
end
where ary is a vector containing a and c, the multiplicative and additive coefficients.
Using this with power set to the cycle length - 1, I was able to determine coefficients which yield the predecessor for various LCGs listed in Wikipedia. For example, to "reverse" the LCG with a = 1664525, c = 1013904223, and m = 2^32, use a = 4276115653 and c = 634785765. You can easily confirm that the latter set of coefficients reverses the sequence produced by using the original coefficients.
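A cross-check in Python, using the built-in modular inverse instead of matrix exponentiation (my sketch; the derivation is x_{i-1} = a^{-1} * (x_i - c) % m):
a, c, m = 1664525, 1013904223, 2**32
a_inv = pow(a, -1, m)            # modular inverse of a (Python 3.8+)
c_rev = (-a_inv * c) % m         # additive constant of the reversed LCG
print(a_inv, c_rev)              # compare with the coefficients quoted above

x0 = 123456789
x1 = (a * x0 + c) % m            # one step forward
assert (a_inv * x1 + c_rev) % m == x0   # one step back recovers x0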
Two functions P1, P2 are given that take an n-bit input x and compute y1 = A1*x and y2 = A2*x, where A1 and A2 are n×n bit matrices. The two functions return the n-bit arrays y1, y2. We don't know anything about these matrices except that A1 and A2 are identical apart from one entry (i,j), with i and j unknown to us. We can call P1 and P2 with different values of x and then compare the outputs of the two functions. With how many calls can we find i and j?
The short answer in our book is: log n calls. Any hint or idea? Thanks to all.
Edit: Someone suggests: first let x be a column vector of all 1s, compute y1 and y2, and find the row that differs. Then let x have 1s in the top n/2 entries and 0s in the bottom n/2; if y1 and y2 are equal, the difference is in columns n/2+1 to n, otherwise in columns 1 to n/2...
You could do it in two calls if you could transpose A1 and A2:
You can determine i by doing one call and simply checking which entry in y1 and y2 differs. That gives you i.
Transpose A1 and A2, do the same thing, and the entry which differs is the index j.
Without the transpose: you still do one multiplication to determine i. Now, just do a "binary search" where the area being searched is marked by the 1 entries of your x vector.
First step: fill half of the x vector with 1s and the other half with 0s, do the multiplication, and check whether the entries at index i differ. If they don't, then j is somewhere in the second half; if they do differ, it is in the first half (the one you filled with 1s).
Second step: split the selected part from the previous step in two, fill one half with 1s and the other with 0s, and repeat the same logic until you are left with exactly one entry. That one is the index j.
Since you are always splitting in two, you will make exactly log(n) calls.
Determine `i` by doing one call with the all-ones vector. (trivial)
lo = 1
hi = n
while lo < hi
    mid = (lo + hi) / 2
    set x[lo..mid] = 1 (0 in all other entries)
    do both function calls
    if result1[i] == result2[i]
        lo = mid + 1    (no difference seen: j is in the 0-half)
    else
        hi = mid        (difference seen: j is in the 1-half)
lo is your index j
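Here is a small Python simulation of the whole scheme (my setup, assuming bit matrices over GF(2) and oracles implemented as matrix-vector products mod 2):
import random

n = 16
A1 = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
A2 = [row[:] for row in A1]
ti, tj = random.randrange(n), random.randrange(n)
A2[ti][tj] ^= 1                                  # the single differing entry

def mul(A, x):                                   # y = A*x over GF(2)
    return [sum(A[r][c] & x[c] for c in range(n)) % 2 for r in range(n)]

P1 = lambda x: mul(A1, x)
P2 = lambda x: mul(A2, x)

# Call 1: the all-ones vector exposes the differing row i.
y1, y2 = P1([1] * n), P2([1] * n)
i = next(r for r in range(n) if y1[r] != y2[r])

# Binary search for the column j: about log2(n) further calls.
lo, hi = 0, n - 1
while lo < hi:
    mid = (lo + hi) // 2
    x = [1 if lo <= col <= mid else 0 for col in range(n)]
    if P1(x)[i] != P2(x)[i]:
        hi = mid                                 # difference seen: j is in the 1-half
    else:
        lo = mid + 1                             # no difference: j is in the 0-half

print((i, lo) == (ti, tj))                       # True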
While working on an image processing task I have come across the following problem: there are n points in the unit square with coordinates x_i and y_i, each assigned a positive or negative weight w_i. Find a rectangle such that the sum of the weights of the points lying within the rectangle is positive and maximal.
By defining a proper grid, the problem can be rephrased as finding a submatrix in an n-by-n matrix A whose sum of elements is maximal. This is also known as the "maximal subrectangle problem" and has been discussed on SO before. While a brute force approach has a run-time of O(n^5), there is a kind of tricky solution with a run-time of O(n^3). It utilizes a solution for the corresponding one-dimensional problem, called "maximal subarray problem", with an O(n) run-time.
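For reference, the O(n) one-dimensional routine is the standard Kadane recurrence; a minimal sketch (my code, not the R implementation mentioned below):
def kadane(arr):
    # best sum over non-empty contiguous subarrays
    best = cur = arr[0]
    for x in arr[1:]:
        cur = max(x, cur + x)     # extend the current run or start a new one
        best = max(best, cur)
    return best

print(kadane([2, -3, 4, -1, 2, -5, 3]))   # 5, from the subarray [4, -1, 2]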
I have implemented both algorithms in R and can solve 100s of points in a few seconds. But with thousands of points it will be much too slow, probably even when outsourcing the loops to some Fortran or C code.
Now look at the matrix A. Assuming (without loss of generality) that all points have distinct x- and y-coordinates, A has a special form: each row and each column of A contains exactly one non-zero element. For matrices with this special property I assume there should be an algorithm performing the task in O(n^2) time, or even better.
Here is an example with the optimal rectangle added:
set.seed(723)
N <- 50; w <- rnorm(N)
x <- runif(N); y <- runif(N)
clr <- ifelse (w >= 0, "blue", "red")
plot(x, y, pch = 20, col = clr, xlim = c(0, 1), ylim = c(0, 1))
rect(0.075, 0.45, 0.31, 0.95, border="gray")
You see that there can be red, i.e. negative, points in the optimal rectangle. It also shows that it will not suffice to solve the one-dimensional cases for the x- and y-coordinates separately.
I will translate the standard solution into Fortran, but I would surely like to have a more efficient algorithm at hand.
These guys (found from the wiki page) claim to have a simpler sub-cubic solution for the 2-dimensional case. It may be the one you're already aware of.
See the accepted answer for "Maximum sum subrectangle in a sparse matrix". For an n×n matrix with m non-zero elements, the solution there takes O(nm log n) time. Since you have exactly n non-zero elements, this gives O(n^2 log n) time, so you will probably be able to handle cases with n being 50 times larger or more, compared with the standard O(n^3) solution.
The best I can do is O(n^2 log n).
If we look at the (n+1 choose 2) calls made by Kadane's 2D algorithm to Kadane's 1D algorithm on an input of your type, all but O(n) successive pairs are on 1D arrays that differ only in one element. I'm going to present a divide-and-conquer variant of Kadane's 1D; by caching the outcomes of each recursive call, only the O(log n) calls that involve the changed array element have to be recomputed, reducing the (amortized) running time of the inner loop from Theta(n) to Theta(log n).
def maxsubarray(arr, a, b):
    # this function returns a 4-tuple
    # element 0 is the max over intervals of the form [i, j)
    # element 1 is the max over intervals of the form [i, b)
    # element 2 is the max over intervals of the form [a, j)
    # element 3 is the max over intervals of the form [a, b), i.e., sum(arr[a:b])
    n = b - a
    if n == 0:
        return (0, 0, 0, 0)
    elif n == 1:
        x = arr[a]
        y = max(x, 0)
        return (y, y, y, x)
    else:
        m = a + n // 2
        l = maxsubarray(arr, a, m)
        r = maxsubarray(arr, m, b)
        return (max(l[0], r[0], l[1] + r[2]),
                max(r[1], l[1] + r[3]),
                max(l[2], l[3] + r[2]),
                l[3] + r[3])
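A quick sanity check (my example values); the first tuple element is the unrestricted maximum:
arr = [2, -3, 4, -1, 2, -5, 3]
print(maxsubarray(arr, 0, len(arr)))   # (5, 3, 4, 2): best subarray [4, -1, 2] sums to 5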