Given an N x N matrix, where N <= 25 and each cell has a positive integer value, how can you partition it with at most K lines (straight up/down lines or straight left/right lines; note that they have to extend from one side of the matrix to the other) so that the maximum group value (the sum of the cells in a group, as determined by the partitions) is minimized?
For example, given the following matrix
1 1 2
1 1 2
2 2 4
and we are allowed to use 2 lines to partition it, we should draw a line between columns 2 and 3, as well as a line between rows 2 and 3, which gives the minimized maximum value, 4.
My first thought would be a bitmask representing the state of the lines, with 2 integers to represent it. However, this is too slow; I think the complexity is O(2^(2N)). Maybe you could solve it for the rows, then solve it for the columns?
Anyone have any ideas?
Edit: Here is the problem after I googled it: http://www.sciencedirect.com/science/article/pii/0166218X94001546
another paper: http://cis.poly.edu/suel/papers/pxp.pdf
I'm trying to read that^
You can try all subsets for vertical lines, and then do dynamic programming for horizontal lines.
Let's say you have fixed the set of vertical lines as S. Denote by D(k, S) the answer for the subproblem consisting of the first k rows of the matrix with the fixed set of vertical lines S. It is then straightforward to find a recurrence that solves D(k, S) using subproblems of smaller size.
Overall complexity should be O(2^N * N^2) if you precompute the sums of each submatrix in the beginning.
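Here is a minimal Python sketch of that idea (my own function and variable names, and assuming K bounds the total number of lines of both orientations); 2-D prefix sums make each block sum O(1):

def min_max_block(matrix, K):
    # Sketch: enumerate every subset of the N-1 vertical cut positions, then DP
    # over horizontal cuts within the remaining line budget.
    N = len(matrix)
    # 2-D prefix sums so any block sum is O(1)
    P = [[0] * (N + 1) for _ in range(N + 1)]
    for i in range(N):
        for j in range(N):
            P[i + 1][j + 1] = matrix[i][j] + P[i][j + 1] + P[i + 1][j] - P[i][j]

    def block(r1, r2, c1, c2):              # sum over rows r1..r2-1, cols c1..c2-1
        return P[r2][c2] - P[r1][c2] - P[r2][c1] + P[r1][c1]

    INF = float('inf')
    best = INF
    for mask in range(1 << (N - 1)):        # subset S of the N-1 vertical cut positions
        cuts = [0] + [c + 1 for c in range(N - 1) if mask >> c & 1] + [N]
        v = len(cuts) - 2                   # vertical lines used
        if v > K:
            continue
        H = K - v                           # horizontal lines still available
        # strip[j][i] = largest block sum among the vertical pieces of rows j..i-1
        strip = [[0] * (N + 1) for _ in range(N)]
        for j in range(N):
            for i in range(j + 1, N + 1):
                strip[j][i] = max(block(j, i, cuts[t], cuts[t + 1])
                                  for t in range(len(cuts) - 1))
        # D[i][h] = minimal achievable maximum over the first i rows with <= h horizontal cuts
        D = [[INF] * (H + 1) for _ in range(N + 1)]
        for h in range(H + 1):
            for i in range(1, N + 1):
                for j in range(i):
                    s = strip[j][i]
                    cand = s if j == 0 else (max(D[j][h - 1], s) if h >= 1 else INF)
                    if cand < D[i][h]:
                        D[i][h] = cand
        best = min(best, D[N][H])
    return best

print(min_max_block([[1, 1, 2], [1, 1, 2], [2, 2, 4]], 2))   # the example above prints 4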
Two lists are provided: one of upper-hemisphere radii and one of lower-hemisphere radii. The first list consists of N upper hemispheres indexed 1 to N and the second has M lower hemispheres indexed 1 to M. A sphere of radius R can be made by taking one upper hemisphere of radius R and one lower hemisphere of radius R. Also, you can put a sphere into a bigger one and create a sequence of nested concentric spheres. But you can't put two or more spheres directly into another one.
If there is a sequence of (D+1) nested spheres, we call this sequence a D-sequence.
Find out how many different X-sequences are possible (1 <= X <= C). An X-sequence is different from another if the index of any of the hemispheres used in one X-sequence differs from the other.
INPUT
The first line contains three integers: N, denoting the number of upper hemispheres; M, denoting the number of lower hemispheres; and C.
The second line contains N space-separated integers denoting the radii of upper hemispheres.
The third line contains M space-separated integers denoting the radii of lower hemispheres.
OUTPUT
Output a single line containing C space-separated integers, the i-th being the number of ways to build an i-sequence, modulo 1000000007.
Example
Input
3 4 3
1 2 3
1 1 3 2
Output
5 2 0
I am looking for those elements which are part of both lists (upper as well as lower hemispheres), so that they can form a sphere, and then taking their maximum count by comparing their counts in both radii lists.
So, for different C, the sum of products of the counts of C+1 elements yields the answer.
How can I calculate the above efficiently, or is there any other approach?
Guys, this is my first answer. Spare me the whip for now, as I am here to learn.
You first find the number of spheres possible for each radius.
Number of spheres: 2 1 1
Radii:             1 2 3
Now, since we can fit a sphere of radius r inside a sphere of radius R whenever R > r, all we need to do is find the number of increasing subsequences of length 2, 3, ..., C+1 in the list of all possible spheres formed (ordered by radius).
List of possible spheres: [1, 1*, 2, 3] (* used to mark the second sphere of radius 1)
Consider D1: it has 2 spheres. Try finding the number of increasing subsequences of length 2 in the above list.
They are:
[1,2], [1*,2], [1,3], [1*,3], [2,3]
Hence the answer is 5.
Get it??
Now, how to solve it:
It can be done using DP. The naive solution has complexity O(n^2 * constant).
You may follow along the lines provided in the following link: DP solution.
It is worth mentioning that faster methods do exist, which use BITs, segment trees, etc.
It is similar to this SPOJ problem.
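A minimal Python sketch of that DP, assuming "increasing" means strictly increasing radius and that every hemisphere-index choice counts as a distinct sphere (names are mine; the inner prefix sum is kept naive instead of using a BIT):

from collections import Counter

MOD = 10**9 + 7

def count_sequences(upper, lower, C):
    # For X = 1..C, count X-sequences: strictly increasing-by-radius chains of X+1 spheres.
    up, lo = Counter(upper), Counter(lower)
    spheres = {r: up[r] * lo[r] for r in up if r in lo}   # distinct spheres per radius
    radii = sorted(spheres)

    # dp[L][i] = number of strictly increasing chains of length L ending at radius radii[i]
    maxlen = C + 1
    dp = [[0] * len(radii) for _ in range(maxlen + 1)]
    for i, r in enumerate(radii):
        dp[1][i] = spheres[r] % MOD
    for L in range(2, maxlen + 1):
        for i in range(len(radii)):
            shorter = sum(dp[L - 1][:i]) % MOD            # naive sum; a BIT would speed this up
            dp[L][i] = shorter * spheres[radii[i]] % MOD

    return [sum(dp[X + 1]) % MOD for X in range(1, C + 1)]

# The worked example from the problem statement:
print(count_sequences([1, 2, 3], [1, 1, 3, 2], 3))        # prints [5, 2, 0]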
I asked whether this problem was NP-complete on the Computer Science forum, but asking for programming heuristics seems better suited for this site. So here it goes.
You are given an NxN grid of unit squares and 2N binary strings of length N. The goal is to fill the grid with 0's and 1's so that each string appears once and only once in the grid, either horizontally (left to right) or vertically (top-down), or to determine that no such solution exists. If N is not fixed, I suspect this is an NP-complete problem. However, are there any heuristics that can hopefully speed up the search beyond brute-force trying of all ways to fill in the grid with N vertical strings?
I remember programming this for my friend that had the 5x5 physical version of this game, but I used brute force back then. I can only think of this heuristic:
Consider a 4x4 map with these 8 strings (read each from left to right):
1 1 0 1
1 0 0 1
1 0 1 1
1 0 1 0
1 1 1 1
1 0 0 0
0 0 1 1
1 1 1 0
(Note that this instance is solvable, since the second 4 strings are the first 4 transposed.)
First attempt:
We will choose columns from left to right. Since 7 of the 8 strings start with 1, we will try to put the one with the most 1s into the first column (so that we can lay rows more easily when the columns are done).
In the second position, most strings have a 0, so you can also try putting a string with the most zeros into the second column, and so on.
This I would call a wide-1 prediction, since it only looks at one column at a time.
(Possible) Improvement:
You can look at 2 columns at a time (a wide-2 prediction, if I may call it that). In this case, from the 8 strings, the most common combination of the first two bits is 10 (5/8), so you would like to choose the first two columns so that the combination 10 occurs as often as possible (in this case, 1111 followed by 1000 gives 3 of 4 rows starting with 10).
(Of course you don't have to stop at 2)
Weaknesses:
I don't know if this would work. I just made it up and thought it might work.
If you choose the wide-X prediction, the number of possibilities is exponential in X.
This can absolutely fail if the distribution of combinations is even.
What you can do:
As I said, this game has a physical 5x5 adaptation, only there you can also lay the strings right-to-left and bottom-to-top. If you knew its name, you could google further; I unfortunately don't remember it.
Sounds like you want the crossword grid filling algorithm:
First, build 2N subsets of your 2N strings -- each subset has all the strings with a particular bit at a particular position. So subset(0,3) is all the strings that have a 0 in the 3rd position and subset(1,5) is all the strings that have a 1 in the 5th position.
The algorithm is a basic brute-force depth-first search trying all possible mappings of strings to slots in the grid, with severe pruning of impossible branches.
Your search state is a set of assignments of strings to slots and a set of sets of possible assignments to the remaining slots. The initial state has 0 assignments and 2N sets, all of which contain all 2N strings.
At each step of the search, pick the most constrained set (the set with the fewest elements) from the set of possible sets. Try each element of that set in turn in that slot (adding it to the assignments and removing it from the set of sets), and constrain all the remaining sets by removing the chosen string and intersecting each crossing set with subset(X, N) (computed in step 1), where X is the corresponding bit of the chosen string and N is the row/column number of the chosen string.
If you find an empty set when picking above, there is no solution with the choices made so far, so backtrack up the tree to a different choice.
This is still EXPTIME, but it is about as fast as you can get it. Since the main time consuming step is the set intersections, using 2N bit binary strings for your set representation is very fast -- for N=32, the sets fit in a 64-bit word and can be intersected with a single AND instruction. It also helps to have a POPCOUNT instruction, since you also need set sizes.
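A compact Python sketch of this search, with plain integer bitmasks for the sets (the slot numbering, function names, and exact pruning points are my own choices):

def solve_grid(strings):
    # Slots 0..N-1 are rows, N..2N-1 are columns; each slot's candidate set is a
    # bitmask over the 2N strings.
    n2 = len(strings)
    N = n2 // 2
    FULL = (1 << n2) - 1

    # subset[b][p] = bitmask of strings having bit b at position p (step 1 above)
    subset = [[0] * N for _ in range(2)]
    for s_idx, s in enumerate(strings):
        for p, ch in enumerate(s):
            subset[int(ch)][p] |= 1 << s_idx

    def dfs(assigned, cands):
        if len(assigned) == n2:
            return dict(assigned)
        # most constrained unassigned slot
        slot = min((s for s in range(n2) if s not in assigned),
                   key=lambda s: bin(cands[s]).count("1"))
        remaining = cands[slot]
        while remaining:
            low = remaining & -remaining          # lowest candidate bit
            s_idx = low.bit_length() - 1
            remaining ^= low
            new_cands = cands[:]
            for other in range(n2):               # the chosen string is used up
                new_cands[other] &= ~low
            new_cands[slot] = low
            line = slot if slot < N else slot - N
            ok = True
            for k in range(N):                    # constrain the N crossing slots
                cross = N + k if slot < N else k
                bit = int(strings[s_idx][k])
                new_cands[cross] &= subset[bit][line]
                if cross not in assigned and new_cands[cross] == 0:
                    ok = False                    # impossible branch, prune
                    break
            if ok:
                assigned[slot] = s_idx
                result = dfs(assigned, new_cands)
                if result is not None:
                    return result
                del assigned[slot]
        return None

    return dfs({}, [FULL] * n2)

# The 4x4 example from the question above:
strings = ["1101", "1001", "1011", "1010", "1111", "1000", "0011", "1110"]
print(solve_grid(strings))   # maps each slot (rows 0..3, columns 4..7) to a string index

With 2N <= 64 every candidate mask fits in one machine word, which is what makes the set intersections essentially free in a lower-level language.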
This can be solved as a 0/1 integer linear program with O(N^2) variables and constraints. First, there are variables Xij which are 1 if string i is assigned to line j (where j = 1 to N are rows and j = (N+1) to 2N are columns). Then there is a variable for each square in the grid, indicating whether the entry is 0 or 1. If the square at position (i,j) has variable Yij, then for the column line through it, the sum of all X variables corresponding to strings that have a 1 in position i is equal to Yij, and the sum of all X variables corresponding to strings that have a 0 in position i is equal to (1 - Yij); similarly for the row line through it and position j. Finally, the sum of the X variables Xij for each string i (summed over all lines j) is equal to 1.
There has been a lot of research into speeding up solvers for 0/1 integer programming, so this may often be able to handle fairly large N (like N=100) for many examples. Also, in some cases, solving the relaxed non-integer linear program and rounding the solution to 0/1 may produce a valid solution in polynomial time.
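For concreteness, here is a hedged sketch of that formulation using the PuLP modelling library (the choice of PuLP, the 0-based line numbering, and all names are my own; any MILP solver interface would do):

from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpStatus

def solve_grid_ilp(strings):
    n2 = len(strings)          # 2N strings
    N = n2 // 2
    prob = LpProblem("binary_grid", LpMinimize)   # pure feasibility, objective is 0

    # X[s][l] = 1 iff string s is assigned to line l (lines 0..N-1 are rows, N..2N-1 columns)
    X = [[LpVariable(f"x_{s}_{l}", cat="Binary") for l in range(n2)] for s in range(n2)]
    # Y[i][j] = the bit written in grid cell (i, j)
    Y = [[LpVariable(f"y_{i}_{j}", cat="Binary") for j in range(N)] for i in range(N)]

    # every string is used exactly once
    for s in range(n2):
        prob += lpSum(X[s][l] for l in range(n2)) == 1

    for i in range(N):
        for j in range(N):
            # row i determines cell (i, j) through bit j of its string
            prob += lpSum(X[s][i] for s in range(n2) if strings[s][j] == "1") == Y[i][j]
            prob += lpSum(X[s][i] for s in range(n2) if strings[s][j] == "0") == 1 - Y[i][j]
            # column j determines cell (i, j) through bit i of its string
            prob += lpSum(X[s][N + j] for s in range(n2) if strings[s][i] == "1") == Y[i][j]
            prob += lpSum(X[s][N + j] for s in range(n2) if strings[s][i] == "0") == 1 - Y[i][j]

    prob.solve()
    if LpStatus[prob.status] != "Optimal":
        return None
    return {s: l for s in range(n2) for l in range(n2) if X[s][l].value() > 0.5}

print(solve_grid_ilp(["1101", "1001", "1011", "1010", "1111", "1000", "0011", "1110"]))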
We could choose the first lg 2N rows out of the 2N strings, and then since 2^(lg 2N) = 2N, in a lot of cases there shouldn't be very many ways to assign the N columns so that the prefixes of length lg 2N are respected. Then all the rows are filled in so they can be checked to see if a solution has been found. We can also try assigning more rows in the beginning, and fill in different combinations of rows besides the initial rows. (e.g. we can try filling in contiguous rows starting anywhere in the grid).
Running time for assigning lg 2N rows out of 2N strings is O((2N)^(lg 2N)) = O(2^((lg 2N)^2)), which grows slower than 2^N. Assigning columns to match the prefixes is the part that's the hardest to predict run time. If a prefix occurs K times among the assigned rows, and there are M remaining strings that have the prefix, then the number of assignments for this prefix is M*(M-1)...(M-K+1). The total number of possible column assignments is the product of these terms over all prefixes that occur among the rows. If this gets to be too large, the number of rows initially assigned can be increased. But it's hard to predict the worst-case run time unless an assumption is made like the NxN grid is filled in randomly.
I'm looking for a variation on the Hungarian algorithm (I think) that will pair N people with each other, excluding self-pairs and reverse-pairs, where N is even.
E.g. given N0 - N5 and a matrix C of costs for each pair, how can I obtain the set of 3 lowest-cost pairs?
C = [ [ - 5 6 1 9 4 ]
[ 5 - 4 8 6 2 ]
[ 6 4 - 3 7 6 ]
[ 1 8 3 - 8 9 ]
[ 9 6 7 8 - 5 ]
[ 4 2 6 9 5 - ] ]
In this example, the resulting pairs would be:
N0, N3
N1, N4
N2, N5
Having typed this out I'm now wondering if I can just increase the cost values in the "bottom half" of the matrix... or even better, remove them.
Is there a variation of Hungarian that works on a non-square matrix?
Or, is there another algorithm that solves this variation of the problem?
Increasing the values of the bottom half can result in an incorrect solution. You can see this because the corner coordinates of the upper half (in your example, coordinates 0,1 and 5,6) will always be considered to be among the minimum X pairs, where X is the size of the matrix.
My Solution for finding the minimum X pairs
Take the standard Hungarian algorithm
You can set the diagonal to a value greater than the sum of the elements in the unaltered matrix — this step may allow you to speed up your implementation, depending on how your implementation handles nulls.
1) The first step of the standard algorithm is to go through each row, and then each column, reducing each row and column individually such that the minimum of each row and column is zero. This is unchanged.
The general principle of this solution, is to mirror every subsequent step of the original algorithm around the diagonal.
2) The next step of the algorithm is to select rows and columns so that every zero is included within the selection, using the minimum number of rows and columns.
My alteration to the algorithm means that when selecting a row/column, also select the column/row mirrored around that diagonal, but count it as one row or column selection for all purposes, including counting the diagonal (which will be the intersection of these mirrored row/column selection pairs) as only being selected once.
3) The next step is to check if you have the right solution — which in the standard algorithm means checking if the number of rows and columns selected is equal to the size of the matrix — in your example if six rows and columns have been selected.
For this variation however, when calculating when to end the algorithm treat each row/column mirrored pair of selections as a single row or column selection. If you have the right solution then end the algorithm here.
4) If the number of rows and columns is less than the size of the matrix, then find the smallest unselected element, and call it k. Subtract k from all uncovered elements, and add it to all elements that are covered twice (again, counting the mirrored row/column selection as a single selection).
My alteration of the algorithm means that when altering values, you will alter their mirrored values identically (this should happen naturally as a result of the mirrored selection process).
Then go back to step 2 and repeat steps 2-4 until step 3 indicates the algorithm is finished.
This will result in pairs of mirrored answers (which are the coordinates — to get the value of these coordinates refer back to the original matrix) — you can safely delete half of each pair arbitrarily.
To alter this algorithm to find the minimum R pairs, where R is less than the size of the matrix, reduce the stopping point in step 3 to R. This alteration is essential to answering your question.
As @Niklas B. stated, you are solving the weighted perfect matching problem.
Take a look at this.
Here is part of a document describing the primal-dual algorithm for weighted perfect matching.
Please read it all and let me know if it is useful to you.
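If an off-the-shelf implementation is acceptable, here is a small sketch using NetworkX's blossom-based matching on the cost matrix from the question (my own choice of library; negating the weights turns its maximum-weight matching into a minimum-weight perfect matching):

import networkx as nx

C = [
    [0, 5, 6, 1, 9, 4],
    [5, 0, 4, 8, 6, 2],
    [6, 4, 0, 3, 7, 6],
    [1, 8, 3, 0, 8, 9],
    [9, 6, 7, 8, 0, 5],
    [4, 2, 6, 9, 5, 0],
]

G = nx.Graph()
n = len(C)
for i in range(n):
    for j in range(i + 1, n):
        # negate costs: a maximum-weight matching of -C is a minimum-weight matching of C
        G.add_edge(i, j, weight=-C[i][j])

# maxcardinality=True forces a perfect matching (every node is paired)
matching = nx.max_weight_matching(G, maxcardinality=True)
print(sorted(tuple(sorted(e)) for e in matching))      # a minimum-total-cost set of pairs
print("total cost:", sum(C[u][v] for u, v in matching))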
I need to randomly generate an NxN matrix of integers in the range 1 to K inclusive such that all rows and columns individually have the property that their elements are pairwise distinct.
For example for N=2 and K=3
This is ok:
1 2
2 1
This is not:
1 3
1 2
(Notice that if K < N this is impossible)
When K is sufficiently larger than N, an efficient enough algorithm is to just generate a random matrix of integers in 1..K, check that each row and each column is pairwise distinct, and if one isn't, try again.
But what about the case where K is not much larger than N?
This is not a full answer, but a warning about an intuitive solution that does not work.
I am assuming that by "randomly generate" you mean with uniform probability on all existing such matrices.
For N=2 and K=3, here are the possible matrices, up to permutations of the set [1..K]:
1 2 1 2 1 2
2 1 2 3 3 1
(since we are ignoring permutations of the set [1..K], we can assume wlog that the first line is 1 2).
Now, an intuitive (but incorrect) strategy would be to draw the matrix entries one by one, ensuring for each entry that it is distinct from the other entries on the same line or column.
To see why it's incorrect, consider that we have drawn this:
1 2
x .
and we are now drawing x. x can be 2 or 3, but if we gave each possibility the probability 1/2, then the matrix
1 2
3 1
would get probability 1/2 of being drawn at the end, while it should have only probability 1/3.
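A tiny simulation (my own illustration, not part of the argument above) makes the bias visible for N=2, K=3 with the first row fixed to 1 2:

import random
from collections import Counter

def draw_cell_by_cell():
    # fill the second row cell by cell, each time choosing uniformly among the
    # values not yet used in that row or column
    grid = [[1, 2], [None, None]]
    for i, j in [(1, 0), (1, 1)]:
        used = {grid[i][c] for c in range(2) if grid[i][c]} | \
               {grid[r][j] for r in range(2) if grid[r][j]}
        grid[i][j] = random.choice([v for v in (1, 2, 3) if v not in used])
    return tuple(grid[1])

counts = Counter(draw_cell_by_cell() for _ in range(100_000))
print(counts)   # the second row (3, 1) shows up about half the time instead of the uniform 1/3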
Here is a (textual) solution. I don't think it provides good randomness, but nevertheless it could be OK for your application.
Let's generate a matrix in the range [0; K-1] (add 1 to all elements afterwards if you want) with the following algorithm:
Generate the first line with any random method you want.
Each number will be the first element of a random sequence calculated in such a manner that you are guaranteed to have no duplicates in subsequent rows, that is, for any distinct columns x and y, you will have x[i] != y[i] for all i in [0; N-1].
Compute each row from the previous one.
The whole algorithm is based on a random generator with the property I mentioned. With a quick search, I found that the inversive congruential generator meets this requirement. It seems easy to implement. It works if K is prime; if K is not prime, see 'Compound Inversive Generators' on the same page. It may be a little tricky to handle perfect squares or cubes (your problem sounds like sudoku :-) ), but I think it is possible by creating compound generators from the prime factors of K with different parametrizations. For all generators, the first element of each column is the seed.
Whatever the value of K, the complexity depends only on N and is O(N^2).
Deterministically generate a matrix having the desired property for rows and columns. Provided K > N, this can easily be done by starting the ith row with i, and filling in the rest of the row with i+1, i+2, etc., wrapping back to 1 after K. Other algorithms are possible.
Randomly permute columns, then randomly permute rows.
Let's show that permuting rows (i.e. picking up entire rows and assembling a new matrix from them in some order, with each row possibly in a different vertical position) leaves the desired properties intact for both rows and columns, assuming they were true before. The same reasoning then holds for column permutations, and for any sequence of permutations of either kind.
Trivially, permuting rows cannot change the property that, within each row, no element appears more than once.
The effect of permuting rows on a particular column is to reorder the elements within that column. This holds for any column, and since reordering elements cannot produce duplicate elements where there were none before, permuting rows cannot change the property that, within each column, no element appears more than once.
I'm not certain whether this algorithm is capable of generating all possible satisfying matrices, or if it does, whether it will generate all possible satisfying matrices with equal probability. Another interesting question that I don't have an answer for is: How many rounds of row-permutation-then-column-permutation are needed? More precisely, is any finite sequence of row-perm-then-column-perm rounds equivalent to a bounded number of (or in particular, one) row-perm-then-column-perm round? If so then nothing is gained by further permutations after the first row and column permutations. Perhaps someone with a stronger mathematics background can comment. But it may be good enough in any case.
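A short Python sketch of steps 1-2 above (the function name is mine; values are 1..K and K >= N is assumed):

import random

def random_distinct_matrix(N, K):
    # Step 1: row i starts at i+1 and counts up, wrapping around after K, so every
    # row and every column already has pairwise distinct entries.
    M = [[(i + j) % K + 1 for j in range(N)] for i in range(N)]
    # Step 2: random row and column permutations preserve both properties.
    random.shuffle(M)                      # permute rows
    cols = list(range(N))
    random.shuffle(cols)                   # permute columns
    return [[row[c] for c in cols] for row in M]

print(random_distinct_matrix(4, 6))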
I have encountered an unusual problem: given an NxM 0-1 matrix and a number K (<= N*M), I have to find a subrectangle of that 0-1 matrix with at least K 1's inside it, such that its area (the product of both dimensions) is minimized.
For example:
00000
01010
00100
01010
00000
K = 3
So I can find a subrectangle with minimal area 6 that contains 3 1's inside.
10
01
10
Note that the target subrectangle I mean must consist of consecutive rows and columns of the original 0-1 matrix.
Compute cumulative sums along rows, R[i,j] (number of ones in row i, columns 1..j), and along columns, C[i,j] (number of ones in column j, rows 1..i).
For each possible top-left corner (i,j) of a sub-rectangle:
    Start from a single-row sub-rectangle (n = i), with m at the last possible column of the matrix.
    While m >= j (and n has not run past the last row):
        While the sub-rectangle [i..n, j..m] contains at least K "ones":
            If this is the smallest sub-rectangle so far, remember it.
            Remove the rightmost column (--m).
            This decreases the number of "ones" by C[n, m+1] - C[i-1, m+1].
        Add the next row (++n).
        This increases the number of "ones" by R[n, m] - R[n, j-1].
Time complexity is O(NM(N+M)).
The two nested loops may be optimized by changing the linear search to a binary search (to process skinny sub-rectangles faster).
Also, it is possible (after adding a row/column to the sub-rectangle) to decrease the number of columns/rows in O(1) time in such a way that the area of the sub-rectangle is not larger than the area of the best-so-far sub-rectangle.
Both of these optimizations require computing the weight of any sub-rectangle in O(1). To make that possible, pre-calculate the cumulative sum of all elements over the sub-rectangles [1..i, 1..j] (call it X[i,j]). Then the weight of any sub-rectangle [i..m, j..n] is X[m,n] - X[i-1,n] - X[m,j-1] + X[i-1,j-1].
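A small Python sketch of that precomputation and the O(1) weight query (names are mine; 1-based indices as in the formula above):

def build_prefix(A):
    # X[i][j] = number of ones in the sub-rectangle [1..i, 1..j]; X[0][*] = X[*][0] = 0
    n, m = len(A), len(A[0])
    X = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            X[i][j] = A[i - 1][j - 1] + X[i - 1][j] + X[i][j - 1] - X[i - 1][j - 1]
    return X

def weight(X, i, j, m, n):
    # number of ones in the sub-rectangle [i..m, j..n] (rows i..m, columns j..n)
    return X[m][n] - X[i - 1][n] - X[m][j - 1] + X[i - 1][j - 1]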
Compute cumulative sums of columns, C[i,j] (number of ones in column j, rows 1..i).
For any starting row (k) of a possible sub-rectangle:
    For any ending row (l) of a possible sub-rectangle:
        Start with columns m = 1 and n = 1.
        While n is not out of bounds:
            While there are fewer than K "ones" in the sub-rectangle [k..l, m..n]:
                Add a column (++n); stop if n runs out of bounds before K "ones" are reached.
                This increases the number of "ones" by C[l,n] - C[k-1,n].
            If this is the smallest sub-rectangle so far, remember it.
            Remove a column (++m).
            This decreases the number of "ones" by C[l,m-1] - C[k-1,m-1].
Time complexity is O(N^2 * M).
The loop over 'l' may be terminated early when all sub-rectangles processed inside it are single-column sub-rectangles (too many rows), or when no sub-rectangle processed inside it contains enough "ones" (not enough rows).
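A runnable Python sketch of this second approach, tried on the example from the question (function and variable names are mine; it returns the best area together with one witness rectangle):

def min_area_with_k_ones(A, K):
    # Fix a row range [t..b], then sweep a two-pointer column window; per-column
    # cumulative sums C make each window update O(1). O(N^2 * M) overall.
    N, M = len(A), len(A[0])
    C = [[0] * M for _ in range(N + 1)]       # C[i][j] = ones in column j among rows 0..i-1
    for i in range(N):
        for j in range(M):
            C[i + 1][j] = C[i][j] + A[i][j]

    best = None
    for t in range(N):                        # top row
        for b in range(t, N):                 # bottom row
            ones, left = 0, 0
            for right in range(M):            # extend the window to the right
                ones += C[b + 1][right] - C[t][right]
                while ones >= K:              # shrink from the left while still feasible
                    area = (b - t + 1) * (right - left + 1)
                    if best is None or area < best[0]:
                        best = (area, (t, b, left, right))
                    ones -= C[b + 1][left] - C[t][left]
                    left += 1
    return best

grid = ["00000",
        "01010",
        "00100",
        "01010",
        "00000"]
print(min_area_with_k_ones([[int(c) for c in row] for row in grid], 3))
# prints (6, (1, 2, 1, 3)): rows 1..2 and columns 1..3 (0-based), area 6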
The problem is NP-hard because the clique decision problem can be reduced to it. So there is no algorithm that is more efficient than the brute-force approach of trying all the possible submatrices (unless P=NP).
The clique decision problem can be reduced to your problem in the following way:
Let the matrix be the adjacency matrix of the graph.
Set K=L^2, where L is the size of the clique we are looking for.
Solve your problem on this input. The graph contains an L-clique iff the solution to your problem is an LxL submatrix containing only ones (which can be checked in polynomial time).
Off the top of my head, you can make a list of the coordinate pairs(?) of all ones in the matrix, find the (smallest) containing sub-rectangles for each K-combination among them*, then pick the smallest of those.
* which is defined by the smallest and largest row and column indices in the K-combination.
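A sketch of that brute-force idea (my own illustration): the bounding box of any K ones is a valid candidate, and the optimal rectangle is the bounding box of some K ones inside it, so the minimum over all K-combinations is the answer.

from itertools import combinations

def min_area_bruteforce(A, K):
    # bounding box of every K-combination of ones; exponential, but simple
    ones = [(i, j) for i, row in enumerate(A) for j, v in enumerate(row) if v]
    best = None
    for combo in combinations(ones, K):
        rows = [r for r, _ in combo]
        cols = [c for _, c in combo]
        area = (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)
        if best is None or area < best:
            best = area
    return best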