Algorithm X to Solve the Exact Cover: Fat Matrices

As I was reading about Knuth's Algorithm X to solve the exact cover problem, I thought of an edge case that I wanted some clarification on.
Here are my assumptions:
Given a matrix A, Algorithm X's "goal is to select a subset of the rows so that the digit 1 appears in each column exactly once."
If the matrix is empty, the algorithm terminates successfully and the solution is then the subset of rows logged in the partial solution up to that point.
If there is a column of 0's, the algorithm terminates unsuccessfully.
For reference: http://en.wikipedia.org/wiki/Algorithm_X
Consider the matrix A:
[[1 1 0]
 [0 1 1]]
Steps I took:
Given Matrix A:
1. Choose a column, c, with the least number of 1's. I choose: column 1
2. Choose a row, r, that contains a 1 in column c. I choose: row 1
3. Add r to the partial solution.
4. For each column j such that A(r, j) = 1:
       For each row i such that A(i, j) = 1:
           delete row i
       delete column j
5. Matrix A is empty. Algorithm terminates successfully and solution is allegedly: {row 1}.
However, this cannot be right: row 1 is [1 1 0], which does not cover the 3rd column.
I would assume that the algorithm should at some point reduce the matrix to the point where there is only a single 0 and terminate unsuccessfully.
Could someone please explain this?

I think the confusion here is simply in the use of the term empty matrix. If you read Knuth's original paper (linked on the Wikipedia article you cited), you can see that he was treating the rows and columns as doubly-linked lists. When he says that the matrix is empty, he doesn't mean that it has no entries, he means that all the row and column objects have been deleted.
To clarify, I'll label the rows with lower case letters and the columns with upper case letters, as follows:
  | A | B | C
---------------
a | 1 | 1 | 0
---------------
b | 0 | 1 | 1
The algorithm states that you choose a column deterministically (using any rule you wish), and he suggests choosing a column with the fewest 1's. So, we'll proceed as you suggest and choose column A. The only row with a 1 in column A is row a, so we choose row a and add it to the possible solution { a }. Now, row a has 1s in columns A and B, so we must delete those columns, and any rows containing 1s in those columns, that is, rows a and b, just as you did. The resulting matrix has a single column C and no rows:
| C
-------
This is not an empty matrix (it has a column remaining). However, column C has no 1s in it, so we terminate unsuccessfully, as the algorithm indicates.
This may seem odd, but it is a very important case if we intend to use an incidence matrix for the Exact Cover Problem, because columns represent elements of the set X that we wish to cover and rows represent subsets of X. So a matrix with some columns and no rows represents the exact cover problem where the collection of subsets to choose from is empty (but there are still points to cover).
If this description causes problems for your implementation, there is a simple workaround: just include the empty set in every problem. The empty set (containing no points of X) is represented by a row of all zeros. It is never selected by your algorithm as part of a solution, never collides with any other selected rows, but always ensures that the matrix is nonempty (there is at least one row) until all the columns have been deleted, which is really all you care about since you need to make sure that each column is covered by some row.
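To make the distinction concrete, here is a minimal Python sketch (the dict-of-sets representation and the names X, Y, algorithm_x are mine, standing in for Knuth's linked lists). Success requires the column map X itself to be empty; a surviving column whose row set is empty just makes the row loop run zero times, which is exactly the unsuccessful termination from the question:

def algorithm_x(X, Y, solution=()):
    # X: {column: set of rows with a 1 there}; Y: {row: list of its columns}.
    # "Empty" means no column objects remain, not "no 1s remain".
    if not X:
        return list(solution)                # all columns deleted: success
    c = min(X, key=lambda col: len(X[col]))  # column with fewest 1s
    for r in sorted(X[c]):                   # a column of 0s makes this loop empty
        cols = set(Y[r])                     # columns covered by row r
        rows = set().union(*(X[j] for j in cols))  # rows colliding with r
        X2 = {j: X[j] - rows for j in X if j not in cols}
        found = algorithm_x(X2, Y, solution + (r,))
        if found is not None:
            return found
    return None                              # column c cannot be covered: failure

X = {'A': {'a'}, 'B': {'a', 'b'}, 'C': {'b'}}
Y = {'a': ['A', 'B'], 'b': ['B', 'C']}
print(algorithm_x(X, Y))  # None: choosing row a leaves column C with no rows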

Related

How to sum columns of an adjacency matrix in O(n) time

I have an n x n adjacency matrix for a directed graph. I want to search to see if any of the columns sum up to n. The problem is that I have to do this in O(n) time. Is there any way to approach this with O(n) time or is it not possible (no code required)?
For reference, below is the question I am trying to solve:
Problem:
During the school band election, each person has a few preferences for the president, and each person's set of preferences always includes him/herself. The "perfect" president is the one who is in the set of preferences of every person and who does not prefer anyone but him/herself. All we want to know is whether such a person exists or not.
Define a directed graph with the set of candidates as the vertices and a directed edge from vertex a to vertex b if and only if b is in the set of preferences of a.
There are n people
We want an algorithm which executes in O(n) time
We are given the graph described above in the form of an n x n adjacency matrix
I figured that if everyone votes for the "perfect president", then he/she will have n incoming edges, therefore summing that column should give us n. If there is a better way to approach this than the way I am doing it, any hints would be appreciated to guide me in the right direction.
Let me repeat the rules and give them numbers for easy reference:
1. All prefer themselves
2. All prefer the perfect president (if there is one)
3. The perfect president (if there is one) does not prefer anyone else
Let's define the adjacency matrix so that the value in row i and column j is 1 when person i has a preference for person j. All other values are 0.
The above three rules can be reformulated in terms of the adjacency matrix as follows:
1. The main diagonal of the adjacency matrix will have all 1's.
2. If the perfect president is number i, then column i will have all 1's.
3. If the perfect president is number i, then row i will have all 0's, except in column i.
Note that there cannot be two or more perfect presidents, as then they would have to prefer each other (rule 2), which violates rule 3.
Algorithm: Zig-Zag Phase
The perfect president (if existing) can be found in linear time, by zig-zagging from the top-left corner of the adjacency matrix (row 0, column 0), according to the following rules:
If the value is 1, then move down to the next row (same column)
If the value is 0, then move right to the next column (same row)
While staying within the boundaries of the matrix, keep repeating the first two steps.
If you exit the matrix, this phase ends. Let's call the column where we exit the matrix column p.
Observation: because of rule 1, this part of the algorithm can never get into the values above the main diagonal: the 1-values on that diagonal are like a wall that pushes us downward whenever we bump into it. You will therefore always exit the matrix via a downward move from the last row. In the most extreme case it would be a downward move at the 1 in the bottom-right corner (on the diagonal) of the matrix.
Here is a simple example of an adjacency matrix where arrows indicate how the algorithm dictates a path from the top-left corner to a 1 somewhere in the bottom row:
1   1   0   1   0
↓
1   1   0   1   0
↓
0 → 1   1   1   0
    ↓
0   0 → 0 → 1   0
            ↓
0   1   1   1   1
            ↓
            = p
Note that in this example there is a perfect president, but that is of course not a requirement: the algorithm will give you a value p whether or not there is a perfect president.
Claim: The perfect president, if there is one, has to be the person with number p.
Proof
Given is the p returned by the above zig-zag algorithm.
First we prove by contradiction that:
a) the perfect president cannot be a person with a number i less than p.
So let's assume the contrary: the perfect president is person i and i < p:
Since we start the zig-zag phase in column 0 and ended up in column p, and since we cannot skip a column during the process, there was a moment when we were in column i. Since we didn't stay in that column, column i must have a zero in one of its rows, which forced us to move right. But rule 2 demands that if i is the perfect president, column i should have only 1-values. Contradiction!
Secondly we prove by contradiction that:
b) the perfect president cannot be a person with a number i greater than p.
So let's assume the contrary: the perfect president is person i and i > p:
Since we start the zig-zag phase in row 0 and reached the last row (cf. the observation), and since we cannot skip a row during the process, there was a moment when we were in row i. Since we didn't stay in that row, but moved down at some point (cf. the observation: we moved out of the matrix with a downward move), row i must have a 1 in one of its columns, which forced us to move down. This 1 cannot be the one on the diagonal (at [i,i]), because we never reached column i: i is greater than p and the algorithm ended in column p. So it must have been another 1, a second one, in row i.
But rule 3 demands that if i is a perfect president, row i should only have one 1-value, with all other values being zeroes. Contradiction!
These two proofs by contradiction leave us no other possible number for the perfect president, if there is one, than number p.
Algorithm: Verification Phase
To actually check whether person p is indeed a perfect president is trivial: we just check that column p contains all 1's, and that row p contains only 0's, except in column p.
Time Complexity
All this can be done in linear time: at most 2n-1 reads from the matrix are required in the zig-zag phase, and 2n-2 more in the final verification phase. Thus this is O(n).
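For concreteness, here is a short Python sketch of both phases; the function name is mine and the test matrix is the worked example above:

def perfect_president(M):
    n = len(M)
    # Zig-zag phase: a 1 pushes us down, a 0 pushes us right. Rule 1
    # (all-1s diagonal) keeps j <= i, so we always exit downward.
    i = j = 0
    while i < n:
        if M[i][j] == 1:
            i += 1
        else:
            j += 1
    p = j
    # Verification phase: column p all 1s (rule 2), row p all 0s except
    # on the diagonal (rule 3).
    if all(M[r][p] == 1 for r in range(n)) and \
       all(M[p][c] == 0 for c in range(n) if c != p):
        return p
    return None

M = [[1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0],
     [0, 0, 0, 1, 0],
     [0, 1, 1, 1, 1]]
print(perfect_president(M))  # 3, matching the diagram above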

Given 2d matrix find minimum sum of elements such that element is chosen one from each row and column?

Find the minimum sum of elements of an n x n 2D matrix such that exactly one element is chosen from each row and each column.
E.g.
4 12
6 6
If I choose 4 from row 1, I cannot choose 12 (also in row 1), nor the 6 in row 2, column 1 (also in column 1);
I can then only choose the 6 in row 2, column 2.
So the minimum sum is 4 + 6 = 10, where the 6 is from row 2, column 2,
and not 6 + 12 = 18, where the 6 is from row 2, column 1;
4 + 12 is also not allowed, since both are in the same row.
I thought of brute force where, once I pick an element, I cannot pick another from its row or column, but this approach is O(n!).
Theorem: If a number is added to or subtracted from all of the entries of any one row or column of the matrix, then the elements selected to achieve the minimum sum in the resulting matrix are the same elements that achieve the minimum sum in the original matrix.
The Hungarian algorithm (which solves the assignment problem, a special case of the min-cost flow problem) uses this theorem to select the elements satisfying the constraints given in your problem:
1. Subtract the smallest entry in each row from all the entries of its row.
2. Subtract the smallest entry in each column from all the entries of its column.
3. Draw lines through appropriate rows and columns so that all the zero entries of the cost matrix are covered and the minimum number of such lines is used.
4. Test for optimality:
   i. If the minimum number of covering lines is n, an optimal assignment of zeros is possible and we are finished.
   ii. If the minimum number of covering lines is less than n, an optimal assignment of zeros is not yet possible. In that case, proceed to Step 5.
5. Determine the smallest entry not covered by any line. Subtract this entry from each uncovered row, and then add it to each covered column. Return to Step 3.
See this example for better understanding.
Both O(n^4) (easy and fast to implement) and O(n^3) (harder to implement) implementations are explained in detail here.
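If a library is acceptable (an assumption on my part), SciPy ships a Hungarian-style solver for exactly this assignment problem; a minimal sketch on the 2x2 example from the question:

import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 12],
                 [6,  6]])
row_ind, col_ind = linear_sum_assignment(cost)        # Hungarian-style solver
print(list(zip(row_ind.tolist(), col_ind.tolist())))  # [(0, 0), (1, 1)]
print(int(cost[row_ind, col_ind].sum()))              # 10, as in the question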

Algorithm to find largest identical-row square in matrix

I have a matrix of 100x100 size and need to find the largest set of rows and columns that create a square having equal rows. Example:
    A B C D E F            C D E
a   0 1 2 3 4 5        a   2 3 4
b   2 9 7 9 8 2
c   9 0 6 8 9 7   ==>
d   8 9 2 3 4 8        d   2 3 4
e   7 2 2 3 4 5        e   2 3 4
f   0 3 6 8 7 2
Currently I am using this algorithm:
candidates = []  // element type is {rows, cols}
foreach row
    foreach col
        candidates[] = {[row], [col]}
do
    retval = candidates.first
    newCandidates = []
    foreach candidates as candidate
        foreach newRow > candidate.rows.max
            foreach newCol > candidate.cols.max
                // compare matrix cells in candidate to newRow and newCol
                if (newCandidateHasEqualRows)
                    newCandidates[] = {candidate.rows + newRow, candidate.cols + newCol}
    candidates = newCandidates
while candidates.count
return retval
Has anyone else come across a problem similar to this? And is there a better algorithm to solve it?
Here's the NP-hardness reduction I mentioned, from biclique. Given a bipartite graph, make a matrix with a row for each vertex in part A and a column for each vertex in part B. For every edge that is present, put a 0 in the corresponding matrix entry. Put a unique positive integer for each other matrix entry. For all s > 1, there is a K_{s,s} subgraph if and only if there is a square of size s (which necessarily is all zero).
Given a fixed set of rows, the optimal set of columns is easily determined. You could try the Apriori algorithm on sets of rows, where a set of rows is considered frequent iff there exist at least as many columns that, together with the rows, form a valid square.
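A sketch of that first observation (the helper name is mine): for a fixed set of rows, the optimal columns are exactly those on which all the chosen rows agree.

def identical_columns(matrix, rows):
    # Columns where every row in `rows` holds the same value.
    first = rows[0]
    return [c for c in range(len(matrix[0]))
            if all(matrix[r][c] == matrix[first][c] for r in rows)]

# The 6x6 example above; rows a, d, e are indices 0, 3, 4.
M = [[0, 1, 2, 3, 4, 5],
     [2, 9, 7, 9, 8, 2],
     [9, 0, 6, 8, 9, 7],
     [8, 9, 2, 3, 4, 8],
     [7, 2, 2, 3, 4, 5],
     [0, 3, 6, 8, 7, 2]]
print(identical_columns(M, [0, 3, 4]))  # [2, 3, 4], i.e. columns C, D, E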
I've implemented a branch and bound solver for this problem in C++ at http://pastebin.com/J1ipWs5b. To my surprise, it actually solves randomly-generated puzzles of size up to 100x100 quite quickly: on one problem with each matrix cell chosen randomly from 0-9, an optimal 4x4 solution is found in about 750ms on my old laptop. As the range of cell entries is reduced down to just 0-1, the solution times get drastically longer -- but still, at 157s (for the one problem I tried, which had an 8x8 optimal solution), this isn't terrible. It seems to be very sensitive to the size of the optimal solution.
At any point in time, we have a partial solution consisting of a set of rows that are definitely included, and a set of rows that are definitely excluded. (The inclusion status of the remaining rows is yet to be determined.) First, we pick a remaining row to "try". We try including the row; then (if necessary; see below) we try excluding it. "Trying" here means recursively solving the corresponding subproblem. We record the set of columns that are identical across all rows that are definitely included in the solution. As rows are added to the partial solution, this set of columns can only shrink.
There are a couple of improvements beyond the standard B&B idea of pruning the search when we determine that we can't develop the current partial solution into a better (i.e. larger) complete solution than some complete solution we have already found:
A dominance rule. If there are any rows that can be added to the current partial solution without shrinking the set of identical columns at all, then we can safely add them immediately, and we never have to consider not adding them. This potentially saves a lot of branching, especially if there are many similar rows in the input.
We can reorder the remaining (not definitely included or definitely excluded) rows arbitrarily. So in particular, we can always pick as the next row to consider the row that would most shrink the set of identical columns: this (perhaps counterintuitive) strategy has the effect of eliminating bad combinations of rows near the top of the search tree, which speeds up the search a lot. It also happens to complement the dominance rule above, because it means that if there are ever two rows X and Y such that X preserves a strict subset of the identical columns that Y preserves, then X will be added to the solution first, which in turn means that whenever X is included, Y will be forced in by the dominance rule and we don't need to consider the possibility of including X but excluding Y.

Hungarian algorithm matching one set to itself

I'm looking for a variation on the Hungarian algorithm (I think) that will pair N people with each other, excluding self-pairs and reverse-pairs, where N is even.
E.g. given N0 - N5 and a matrix C of costs for each pair, how can I obtain the set of 3 lowest-cost pairs?
C = [ [ - 5 6 1 9 4 ]
[ 5 - 4 8 6 2 ]
[ 6 4 - 3 7 6 ]
[ 1 8 3 - 8 9 ]
[ 9 6 7 8 - 5 ]
[ 4 2 6 9 5 - ] ]
In this example, one lowest-cost set of pairs would be:
N0, N3
N1, N2
N4, N5
with total cost 1 + 4 + 5 = 10.
Having typed this out I'm now wondering if I can just increase the cost values in the "bottom half" of the matrix... or even better, remove them.
Is there a variation of Hungarian that works on a non-square matrix?
Or, is there another algorithm that solves this variation of the problem?
Increasing the values of the bottom half can result in an incorrect solution. You can see this because the corner coordinates of the upper half (in your example, (0,1) and (4,5)) will always be considered to be in the minimum X pairs, where X is the size of the matrix.
My Solution for finding the minimum X pairs
Take the standard Hungarian algorithm
You can set the diagonal to a value greater than the sum of the elements in the unaltered matrix — this step may allow you to speed up your implementation, depending on how your implementation handles nulls.
1) The first step of the standard algorithm is to go through each row, and then each column, reducing each row and column individually such that the minimum of each row and column is zero. This is unchanged.
The general principle of this solution, is to mirror every subsequent step of the original algorithm around the diagonal.
2) The next step of the algorithm is to select rows and columns so that every zero is included within the selection, using the minimum number of rows and columns.
My alteration to the algorithm means that when selecting a row/column, also select the column/row mirrored around that diagonal, but count it as one row or column selection for all purposes, including counting the diagonal (which will be the intersection of these mirrored row/column selection pairs) as only being selected once.
3) The next step is to check if you have the right solution — which in the standard algorithm means checking if the number of rows and columns selected is equal to the size of the matrix — in your example if six rows and columns have been selected.
For this variation however, when calculating when to end the algorithm treat each row/column mirrored pair of selections as a single row or column selection. If you have the right solution then end the algorithm here.
4) If the number of rows and columns is less than the size of the matrix, then find the smallest unselected element, and call it k. Subtract k from all uncovered elements, and add it to all elements that are covered twice (again, counting the mirrored row/column selection as a single selection).
My alteration of the algorithm means that when altering values, you will alter their mirrored values identically (this should happen naturally as a result of the mirrored selection process).
Then go back to step 2 and repeat steps 2-4 until step 3 indicates the algorithm is finished.
This will result in pairs of mirrored answers (these are coordinates; to get their values, refer back to the original matrix); you can safely discard one member of each pair arbitrarily.
To alter this algorithm to find the minimum R pairs, where R is less than the size of the matrix, reduce the stopping point in step 3 to R. This alteration is essential to answering your question.
As @Niklas B. stated, you are solving the weighted perfect matching problem.
Take a look at this; here is part of a document describing the primal-dual algorithm for weighted perfect matching.
Please read it all and let me know if it is useful to you.
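If a library is an option (an assumption, not part of the answers above), a minimal NetworkX sketch: negate the costs and ask for a maximum-weight matching of maximum cardinality, which is a minimum-cost perfect matching. Building only the upper half of C excludes self-pairs and reverse-pairs by construction; the "-" diagonal is written as 0 but never used.

import networkx as nx

C = [[0, 5, 6, 1, 9, 4],
     [5, 0, 4, 8, 6, 2],
     [6, 4, 0, 3, 7, 6],
     [1, 8, 3, 0, 8, 9],
     [9, 6, 7, 8, 0, 5],
     [4, 2, 6, 9, 5, 0]]

G = nx.Graph()
for i in range(len(C)):
    for j in range(i + 1, len(C)):         # upper half only
        G.add_edge(i, j, weight=-C[i][j])  # negate: min-cost becomes max-weight

pairs = sorted(tuple(sorted(p)) for p in
               nx.max_weight_matching(G, maxcardinality=True))
print(pairs, sum(C[i][j] for i, j in pairs))  # e.g. [(0, 3), (1, 2), (4, 5)] 10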

Generating Random Matrix With Pairwise Distinct Rows and Columns

I need to randomly generate an NxN matrix of integers in the range 1 to K inclusive such that all rows and columns individually have the property that their elements are pairwise distinct.
For example for N=2 and K=3
This is ok:
1 2
2 1
This is not:
1 3
1 2
(Notice that if K < N this is impossible)
When K is sufficiently larger than N an efficient enough algorithm is just to generate a random matrix of 1..K integers, check that each row and each column is pairwise distinct, and if it isn't try again.
But what about the case where K is not much larger than N?
This is not a full answer, but a warning about an intuitive solution that does not work.
I am assuming that by "randomly generate" you mean with uniform probability on all existing such matrices.
For N=2 and K=3, here are the possible matrices, up to permutations of the set [1..K]:
1 2    1 2    1 2
2 1    2 3    3 1
(since we are ignoring permutations of the set [1..K], we can assume wlog that the first line is 1 2).
Now, an intuitive (but incorrect) strategy would be to draw the matrix entries one by one, ensuring for each entry that it is distinct from the other entries on the same line or column.
To see why it's incorrect, consider that we have drawn this:
1 2
x .
and we are now drawing x. x can be 2 or 3, but if we gave each possibility the probability 1/2, then the matrix
1 2
3 1
would get probability 1/2 of being drawn at the end, while it should have only probability 1/3.
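The bias is easy to verify by exact enumeration; a small Python sketch (N = 2, K = 3, first row fixed to 1 2 as above):

from fractions import Fraction
from collections import defaultdict

probs = defaultdict(Fraction)
for x in (2, 3):                       # legal values for the cell x
    # the last cell must differ from x (same row) and from 2 (same column)
    legal = [v for v in (1, 2, 3) if v != x and v != 2]
    for last in legal:
        probs[(x, last)] += Fraction(1, 2) / len(legal)

for (x, last), p in sorted(probs.items()):
    print(f"1 2 / {x} {last}: {p}")    # (3, 1) gets 1/2 instead of the uniform 1/3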
Here is a (textual) solution. I don't think it provides good randomness, but nevertheless it could be OK for your application.
Let's generate a matrix in the range [0;K-1] (add 1 to all elements afterwards if you want [1;K]) with the following algorithm:
Generate the first line with any random method you want.
Each number will be the first element of a random sequence calculated in such a manner that you are guaranteed to have no duplicates in subsequent rows; that is, for any distinct columns x and y, you will have x[i] != y[i] for all i in [0;N-1].
Compute each row from the previous one.
The whole algorithm is based on a random generator with the property I mentioned. With a quick search, I found that the inversive congruential generator meets this requirement. It seems to be easy to implement. It works if K is prime; if K is not prime, see 'Compound Inversive Generators' on the same page. Maybe it will be a little tricky to handle perfect squares or cubes (your problem sounds like sudoku :-) ), but I think it is possible by creating compound generators with prime factors of K and different parametrization. For all generators, the first element of each column is the seed.
Whatever the value of K, the complexity depends only on N and is O(N^2).
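A minimal sketch of this construction for prime K (here K = 7, N = 4; the parameters a = c = 1 are my choice and happen to give the maximal period 7, which is what the scheme needs; check the period for your own K):

import random

K, N, a, c = 7, 4, 1, 1

def inv(x):
    return pow(x, K - 2, K) if x else 0    # modular inverse, with inv(0) := 0

def step(x):
    return (a * inv(x) + c) % K            # ICG step: a bijection on [0;K-1]

rows = [random.sample(range(K), N)]        # random first row, distinct entries
for _ in range(N - 1):
    # A bijection keeps row entries pairwise distinct; the maximal period
    # keeps the first N < K iterates of each column pairwise distinct.
    rows.append([step(x) for x in rows[-1]])
for r in rows:
    print(r)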
Deterministically generate a matrix having the desired property for rows and columns. Provided K > N, this can easily be done by starting the ith row with i, and filling in the rest of the row with i+1, i+2, etc., wrapping back to 1 after K. Other algorithms are possible.
Randomly permute columns, then randomly permute rows.
Let's show that permuting rows (i.e. picking up entire rows and assembling a new matrix from them in some order, with each row possibly in a different vertical position) leaves the desired properties intact for both rows and columns, assuming they were true before. The same reasoning then holds for column permutations, and for any sequence of permutations of either kind.
Trivially, permuting rows cannot change the property that, within each row, no element appears more than once.
The effect of permuting rows on a particular column is to reorder the elements within that column. This holds for any column, and since reordering elements cannot produce duplicate elements where there were none before, permuting rows cannot change the property that, within each column, no element appears more than once.
I'm not certain whether this algorithm is capable of generating all possible satisfying matrices, or if it does, whether it will generate all possible satisfying matrices with equal probability. Another interesting question that I don't have an answer for is: How many rounds of row-permutation-then-column-permutation are needed? More precisely, is any finite sequence of row-perm-then-column-perm rounds equivalent to a bounded number of (or in particular, one) row-perm-then-column-perm round? If so then nothing is gained by further permutations after the first row and column permutations. Perhaps someone with a stronger mathematics background can comment. But it may be good enough in any case.
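A sketch of this answer's two steps (the function name is mine):

import random

def random_distinct_matrix(n, k):
    assert k >= n, "impossible when K < N"
    # Deterministic seed matrix: row i is i+1, i+2, ..., wrapping after k.
    base = [[(i + j) % k + 1 for j in range(n)] for i in range(n)]
    random.shuffle(base)                   # randomly permute rows
    perm = random.sample(range(n), n)      # randomly permute columns
    return [[row[j] for j in perm] for row in base]

for row in random_distinct_matrix(3, 4):
    print(row)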
