Algorithm to maximize the smallest diagonal element of a matrix

Suppose we are given a square matrix A. Our goal is to maximize the smallest diagonal element by row permutations. In other words, the given matrix A has n diagonal elements, and thus a minimum $\min\{d_i\}$. Our purpose is to reach, by row permutations, the matrix whose smallest diagonal element is as large as possible.
This is $\max \min\{d_i\}$ over all row permutations.
For example, suppose A = [4 3 2 1; 1 4 3 2; 2 1 4 3; 2.5 3.5 4.5 1.5]. The diagonal is [4, 4, 4, 1.5]. The minimum of the diagonal is 1.5. We can swap rows 3 and 4 to get a new matrix $\tilde{A}$ = [4 3 2 1; 1 4 3 2; 2.5 3.5 4.5 1.5; 2 1 4 3]. The new diagonal is [4, 4, 4.5, 3] with a new minimum 3. And this seems to be the best result I can obtain: 3 appears to be the $\max \min\{d_i\}$.
In my problem, n is much larger, like 1000. There are n! row permutations, so I cannot go through every permutation. I know a greedy algorithm will help: we start from the first row. If a_11 is not the largest element in the first column, we swap row 1 with the row containing the largest element of the first column. Then we look at the second row, comparing a_22 with all remaining elements in the second column (excluding a_12, since row 1 is now fixed), and swap if a_22 is not the largest among them. We keep doing this until the last row.
Is there any better algorithm to do it?
This is similar to Minimum Euclidean Matching but they are not the same.
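For concreteness, a rough Python sketch of that greedy (the function name is mine):

def greedy_diagonal(A):
    # Work on a copy; rows are swapped in place, column by column.
    A = [row[:] for row in A]
    n = len(A)
    for k in range(n):
        # Among rows k..n-1, find the row whose entry in column k is largest
        # and move it up to diagonal position k.
        best = max(range(k, n), key=lambda i: A[i][k])
        A[k], A[best] = A[best], A[k]
    return A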

Suppose you wanted to know whether there was a better solution to your problem than 3.
Change your matrix to have a 1 for every element that is strictly greater than 3:
4    3    2    1          1 0 0 0
1    4    3    2          0 1 0 0
2.5  3.5  4.5  1.5   ->   0 1 1 0
2    1    4    3          0 0 1 0
Your problem can be interpreted as trying to find a perfect matching in the bipartite graph which has this binary matrix as its biadjacency matrix.
In this case, it is easy to see that there is no way of improving your result because there is no way of reordering rows to make the diagonal entry in the last column greater than 3.
For a larger matrix, there are efficient algorithms to determine maximum matchings in bipartite graphs.
This suggests an algorithm:
Use bisection to find the largest value for which the generated graph has a perfect matching
The assignment corresponding to the perfect matching at that largest value gives the best permutation of rows
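For concreteness, here is a minimal sketch of this bisection-plus-matching scheme, assuming networkx's bipartite maximum_matching (Hopcroft-Karp); the function name max_min_diagonal is illustrative, not part of the original answer.

import networkx as nx
from networkx.algorithms import bipartite

def max_min_diagonal(A):
    # Binary-search over the distinct values of A for the largest threshold v
    # such that the bipartite graph with an edge (row i, column j) whenever
    # A[i][j] >= v has a perfect matching.
    n = len(A)
    values = sorted({x for row in A for x in row})
    rows = [('r', i) for i in range(n)]

    def matching_at(v):
        G = nx.Graph()
        G.add_nodes_from(rows)
        G.add_nodes_from(('c', j) for j in range(n))
        for i in range(n):
            for j in range(n):
                if A[i][j] >= v:
                    G.add_edge(('r', i), ('c', j))
        m = bipartite.maximum_matching(G, top_nodes=rows)  # Hopcroft-Karp
        # m maps matched nodes on both sides; perfect iff all n rows are matched
        return m if sum(1 for k in m if k[0] == 'r') == n else None

    lo, hi, best = 0, len(values) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        m = matching_at(values[mid])
        if m is not None:
            best = (values[mid], m)
            lo = mid + 1
        else:
            hi = mid - 1
    value, m = best
    # perm[j] = index of the original row placed at row j of the permuted matrix
    perm = [m[('c', j)][1] for j in range(n)]
    return value, perm

A = [[4, 3, 2, 1], [1, 4, 3, 2], [2, 1, 4, 3], [2.5, 3.5, 4.5, 1.5]]
print(max_min_diagonal(A))  # prints 3 and one optimal row placement, e.g. (3, [0, 1, 3, 2])

Each feasibility check costs one matching computation, and the bisection only adds a logarithmic factor in the number of distinct matrix values.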
EDIT
This Python code illustrates how to use the networkx library to determine whether the graph has a perfect matching for a particular cutoff value.
import networkx as nx

A = [[4, 3, 2, 1],
     [1, 4, 3, 2],
     [2, 1, 4, 3],
     [2.5, 3.5, 4.5, 1.5]]
cutoff = 3

# Max-flow formulation: source -> each row, each column -> sink,
# and an edge row i -> column j whenever A[i][j] > cutoff.
G = nx.DiGraph()
for i, row in enumerate(A):
    G.add_edge('start', 'row' + str(i), capacity=1.0)
    G.add_edge('col' + str(i), 'end', capacity=1.0)
    for j, e in enumerate(row):
        if e > cutoff:
            G.add_edge('row' + str(i), 'col' + str(j), capacity=1.0)

# A perfect matching exists iff the max flow saturates all rows.
# (In recent networkx versions use nx.maximum_flow_value instead of nx.max_flow.)
if nx.max_flow(G, 'start', 'end') < len(A):
    print 'No perfect matching'
else:
    print 'Has a perfect matching'
For a random matrix of size 1000*1000 it takes about 1 second on my computer.

Let $x_{ij}$ be 1 if row i is moved to row j and zero otherwise.
You're interested in the following integer program:
max $z$
subject to
$\sum_{i=1}^{n} x_{ij} = 1 \quad \forall j$
$\sum_{j=1}^{n} x_{ij} = 1 \quad \forall i$
$\sum_{i=1}^{n} A[i,j]\, x_{ij} \ge z \quad \forall j$
$x_{ij} \in \{0, 1\} \quad \forall i, j$
Then plug this into GLPK, Gurobi, or CPLEX. Alternatively, solve the IP with your own branch-and-bound solver.
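For illustration, a minimal sketch of this IP in PuLP (PuLP and its default CBC solver are my choice here, not something the answer prescribes); x[i][j] = 1 means row i of A is moved to row j.

import pulp

A = [[4, 3, 2, 1], [1, 4, 3, 2], [2, 1, 4, 3], [2.5, 3.5, 4.5, 1.5]]
n = len(A)

prob = pulp.LpProblem("max_min_diagonal", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (range(n), range(n)), cat="Binary")
z = pulp.LpVariable("z")

prob += z  # objective: maximize the smallest diagonal entry
for j in range(n):
    prob += pulp.lpSum(x[i][j] for i in range(n)) == 1             # each position receives one row
    prob += pulp.lpSum(A[i][j] * x[i][j] for i in range(n)) >= z   # diagonal entry at position j is at least z
for i in range(n):
    prob += pulp.lpSum(x[i][j] for j in range(n)) == 1             # each row is used once

prob.solve()
print(pulp.value(z))  # 3 for the example matrix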

Related

Sort matrix elements around the diagonal

I am looking for an algorithm that can sort the rows of a matrix so that the elements cluster around the diagonal.
I will have a square matrix (around 80 rows/columns) containing only the values 0 and 1. There are algorithms that sort the rows so that most of the elements with the value 1 are below the diagonal.
I need an algorithm that sorts the rows to minimize the mean distance of the elements to the diagonal.
Like so:
from:
0 1 0
1 0 1
1 1 0
to:
1 1 0
0 1 0
1 0 1
Since I am not familiar with this topic I hope that someone can help me. I am not looking for a complete solution. The name of such algorithm if it exists or a pseudo code would be sufficient.
Thanks a lot!
There is probably a more efficient way, but you could treat this problem as an assignment problem (trying to assign each row to a diagonal element).
This can be done in three steps:
1) Create a new matrix M where each entry M(i,j) contains the cost of assigning row i of your input matrix to the diagonal element j. For your example this matrix will be the following (average distance to the diagonal element):
1 0 1
1 1 1
0.5 0.5 1.5
Example: M(0,0) = 1 is the average distance when assigning row 0 of the input matrix (0 1 0) to the diagonal element positioned at 0.
2) Run an algorithm to find the best assignment (e.g., the Hungarian algorithm). This will give you an optimal 1:1 matching between rows and columns minimizing the sum of costs in the matrix.
The result will be the elements (0,1), (1,2) and (2,0)
3) Rearrange your input matrix using this knowledge. So
row 0 -> row 1
row 1 -> row 2
row 2 -> row 0
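As a sketch of these three steps, assuming SciPy's linear_sum_assignment as the Hungarian-style solver (variable names are mine, and every row is assumed to contain at least one 1):

import numpy as np
from scipy.optimize import linear_sum_assignment

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [1, 1, 0]])
n = A.shape[0]

# Step 1: cost of assigning row i to diagonal position j =
# average distance of the 1-entries of row i to column j.
cost = np.zeros((n, n))
for i in range(n):
    ones = np.nonzero(A[i])[0]
    for j in range(n):
        cost[i, j] = np.abs(ones - j).mean()

# Step 2: optimal assignment minimizing the total cost.
rows, cols = linear_sum_assignment(cost)

# Step 3: rearrange: row i of A becomes row cols[i] of the output.
result = np.empty_like(A)
result[cols] = A[rows]
print(result)  # matches the desired output in the question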

How to solve Twisty Movement from Codeforces?

I've read the editorial, but it's very short and claims something I don't understand: why is it equivalent to finding the longest subsequence of the form 1*2*1*2*? Can someone explain the solution step by step and justify the claims at each step? http://codeforces.com/contest/934/problem/C
Here is the 'solution' from the editorial, but as I said it's very short and I don't understand it. I hope someone can guide me to the solution step by step, justifying the claims along the way. Thanks.
Since 1 ≤ ai ≤ 2, it's equivalent to finding a longest subsequence like 1*2*1*2*. By an easy dynamic programming we can find it in O(n) or O(n^2) time. You can see the O(n^2) solution in the model solution below. Here we introduce an O(n) approach: since the subsequence can be split into 4 parts (11...22...11...22...), we can set dp[i][j] (i = 1...n, j = 0..3) to be the longest subsequence of a[1...i] with the first j parts.
I also think that the cited explanation is not super clear. Here is another take.
You can collapse an original array
1 1 2 2 2 1 1 2 2 1
into a weighted array
2 3 2 2 1
^ ^ ^ ^ ^
1 2 1 2 1
where numbers at the top represent lengths of contiguous strips of repeated values in the original array.
We can convince ourselves that
The optimal flip does not "break up" any contiguous sequences.
The optimal flip starts and ends with different values (i.e. starts with 1 and ends with 2, or starts with 2 and ends with 1).
Hence, the weighted array contains enough information to solve the problem. We want to flip a contiguous slice of the weighted array s.t. the sum of weights associated with some contiguous monotonic sequence is maximized.
Specifically, we want to perform the flip in such a way that some contiguous monotonic sequence 112, 122, 211 or 221 has maximum weight.
One way to do this with dynamic programming is by creating 4 auxiliary arrays.
A[i] : maximal weight of any 1 to the right of i.
B[i] : maximal weight of any 1 to the left of i.
C[i] : maximal weight of any 2 to the right of i.
D[i] : maximal weight of any 2 to the left of i.
Let's assume that if any of A,B,C,D is accessed out of bounds, the returned value is 0.
We initialize x = 0 and do one pass through the array Arr = [1, 2, 1, 2, 1] with weights W = [2, 3, 2, 2, 1]. At each index i, we have 2 cases:
Arr[i:i+2] == 1 2. In this case we set
x = max(x, W[i] + W[i+1] + C[i+1], W[i] + W[i+1] + B[i-1]).
Arr[i:i+2] == 2 1. In this case we set
x = max(x, W[i] + W[i+1] + A[i+1], W[i] + W[i+1] + D[i-1]).
The resulting x is our answer. This is an O(N) solution.
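For completeness, here is a minimal sketch of the O(n) four-part DP the editorial itself hints at (dp[i][j] in the quote above); the function name and the example call are mine.

def longest_after_one_reversal(a):
    # dp[j] = longest subsequence of the prefix seen so far of the form
    # 1*, 1*2*, 1*2*1*, 1*2*1*2* for j = 0, 1, 2, 3 respectively.
    dp = [0, 0, 0, 0]
    for v in a:
        new = [0, 0, 0, 0]
        new[0] = dp[0] + (v == 1)
        new[1] = max(new[0], dp[1] + (v == 2))
        new[2] = max(new[1], dp[2] + (v == 1))
        new[3] = max(new[2], dp[3] + (v == 2))
        dp = new
    return dp[3]

print(longest_after_one_reversal([1, 1, 2, 2, 2, 1, 1, 2, 2, 1]))  # prints 9 for the example array above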

Generate a list of permutations paired with their number of inversions

I'm looking for an algorithm that generates all permutations of a set. To make it easier, the set is always [0, 1..n]. There are many ways to do this and it's not particularly hard.
What I also need is the number of inversions of each permutation.
What is the fastest (in terms of time complexity) algorithm that does this?
I was hoping that there's a way to generate those permutations that produces the number of inversions as a side-effect without adding to the complexity.
The algorithm should generate lists, not arrays, but I'll accept array based ones if it makes a big enough difference in terms of speed.
Plus points (...there are no points...) if it's functional and is implemented in a pure language.
There is the Steinhaus–Johnson–Trotter algorithm, which makes it easy to keep the inversion count during permutation generation. Excerpt from the Wiki:
Thus, from the single permutation on one element,
1
one may place the number 2 in each possible position in descending
order to form a list of two permutations on two elements,
1 2
2 1
Then, one may place the number 3 in each of three different positions
for each of these two permutations, in descending order for the first
permutation 1 2, and then in ascending order for the permutation 2 1:
1 2 3
1 3 2
3 1 2
3 2 1
2 3 1
2 1 3
At every step of the recursion we insert the biggest number into the list of smaller numbers. It is obvious that this insertion adds M new inversions, where M is the insertion position (counting from the right). For example, if we have the list 3 1 2 (2 inversions) and insert 4:
3 1 2 4 //position 0, 2 + 0 = 2 inversions
3 1 4 2 //position 1, 2 + 1 = 3 inversions
3 4 1 2 //position 2, 2 + 2 = 4 inversions
4 3 1 2 //position 3, 2 + 3 = 5 inversions
pseudocode:
function Generate(List, Count)
  N = List.Length
  if N = N_Max then
    Output(List, 'InvCount = ': Count)
  else
    for Position = 0 to N do
      Generate(List.Insert(N, N - Position), Count + Position)
P.S. The recursive method is not mandatory here, but I suspect it is natural for functional programmers.
P.P.S. If you are worried about inserting into lists, consider the Even's speedup section, which uses only exchanges of neighbouring elements; every exchange increments or decrements the inversion count by 1.
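Here is a runnable Python translation of the pseudocode above (names and the driver are mine):

def generate(lst, inv_count, n_max, out):
    n = len(lst)
    if n == n_max:
        out.append((lst, inv_count))
    else:
        # Insert the next value (n + 1) at index n - position, i.e. `position`
        # places from the right; this adds exactly `position` inversions.
        for position in range(n + 1):
            new_lst = lst[:n - position] + [n + 1] + lst[n - position:]
            generate(new_lst, inv_count + position, n_max, out)

out = []
generate([], 0, 3, out)
for perm, inv in out:
    print(perm, inv)   # e.g. ([3, 1, 2], 2) and ([3, 2, 1], 3)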
Here is an algorithm that does the task, is amortized O(1) per permutation, and generates an array of tuples of linked lists that share as much memory as they reasonably can.
I'll implement all except the linked list bit in untested Python. Though Python would be a bad language for a real implementation.
def permutations(sorted_list):
    answer = []

    def add_permutations(reversed_sublist, tail_node, inversions):
        if 0 == len(reversed_sublist):
            answer.append((tail_node, inversions))
        else:
            for idx, val in enumerate(reversed_sublist):
                add_permutations(
                    [x for x in reversed_sublist if x != val],
                    ListNode(val, tail_node),  # linked-list node, not implemented here
                    inversions + idx)

    add_permutations(list(reversed(sorted_list)), EmptyListNode(), 0)
    return answer
You might wonder at my claim of amortized O(1) work with all of this copying. That's because when m elements are left we do O(m) work and then amortize it over m! permutations. So the amortized cost of the higher-level nodes is a converging cost per bottom-level call, of which we need one per permutation.

Constrained maximization of the sum of square submatrices

I have an intensity map of an image from which I would like to select sub-regions with a large average value. To do this, I want to find the sub-regions that maximize the sum of the intensity-map pixels they cover. To prevent an excessive number of returned sub-regions, a penalty is applied for each additional sub-region returned. Additionally, it is fine if two sub-regions overlap, but the objective only counts the union of the covered pixels (overlapping pixels are not counted twice).
More formally, suppose you have a matrix A containing non-negative values with dimensions m x n. You would like to cover the matrix with square sub-matrices with dimension s x s such that the sum of the values of A covered by the union of the area of the squares is maximized. For each square you add to the solution, a constant penalty p is subtracted from the objective value of the solution.
For instance, consider the following matrix:
0 0 0 0 0 0
0 1 2 2 1 0
0 1 2 2 2 0
0 0 0 0 0 0
0 3 0 0 0 0
with parameters p = 4 and s = 2. The optimal solution is the two squares S1 = [1, 2; 1, 2] and S2 = [2, 1; 2, 2] with coordinates (2:3,2:3) and (2:3,4:5) respectively (in Matlab notation). Note that in this example the greedy approach of incrementally adding the squares with maximum value until no square can be added (without decreasing the objective value) fails.
One brute force way of solving it would be to check all possible combinations using exactly k squares. Starting from k =1, you would compute the optimal combination with exactly k squares, increment k and repeat until the objective value stops increasing. This is clearly very expensive.
You can precompute the sums of values of the (m-s+1)*(n-s+1) possible squares in time O(mn) using an integral image.
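For reference, a minimal sketch of that integral-image precomputation (the function name is mine):

def square_sums(A, s):
    m, n = len(A), len(A[0])
    # I[i][j] = sum of A[0:i][0:j], i.e. 2D prefix sums with a zero border.
    I = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            I[i + 1][j + 1] = A[i][j] + I[i][j + 1] + I[i + 1][j] - I[i][j]
    # Sum of the s x s square whose top-left corner is (i, j).
    return [[I[i + s][j + s] - I[i][j + s] - I[i + s][j] + I[i][j]
             for j in range(n - s + 1)] for i in range(m - s + 1)]

A = [[0, 0, 0, 0, 0, 0],
     [0, 1, 2, 2, 1, 0],
     [0, 1, 2, 2, 2, 0],
     [0, 0, 0, 0, 0, 0],
     [0, 3, 0, 0, 0, 0]]
print(square_sums(A, 2))  # entry (1, 1) is 6 and entry (1, 3) is 7, the two optimal squares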
Is there an efficient solution to this?
The problem is NP-hard. This can be proven by reduction from planar minimum vertex cover; the proof for the special case s = 3, p = 2, and A containing only the values 0 or 1 is identical to the proof given for another SO question.
As for the brute-force solution, it can be made more efficient if, instead of trying all combinations with increasing k, you add squares incrementally. When the objective value of a partial solution plus the sum of the not-yet-covered values is not greater than the best objective value found so far, roll back to the last valid combination by removing the recently added square(s) and try other squares. Avoid adding squares that add zero to the objective value. Also avoid adding sub-optimal squares: if, in the example from the OP, the partial solution contains the square [1, 2; 1, 2], do not add the square [2, 2; 2, 2], because [2, 1; 2, 2] would always be at least as good. Finally, reorder the squares so that you quickly obtain a good enough solution; this allows further attempts to be terminated sooner.

Algorithm puzzle interview

I found this interview question, and I couldn't come up with an algorithm better than O(N^2 * P):
Given a vector of P natural numbers (1,2,3,...,P) and another vector of length N whose elements are from the first vector, find the longest subsequence in the second vector, such that all elements are uniformly distributed (have the same frequency).
Example : (1,2,3) and (1,2,1,3,2,1,3,1,2,3,1). The longest subsequence is in the interval [2,10], because it contains all the elements from the first sequence with the same frequency (1 appears three times, 2 three times, and 3 three times).
The time complexity should be O(N * P).
"Subsequence" usually means noncontiguous. I'm going to assume that you meant "sublist".
Here's an O(N P) algorithm assuming we can hash (assumption not needed; we can radix sort instead). Scan the array keeping a running total for each number. For your example,
element read | running counts of 1, 2, 3
             | 0 0 0
      1      | 1 0 0
      2      | 1 1 0
      1      | 2 1 0
      3      | 2 1 1
      2      | 2 2 1
      1      | 3 2 1
      3      | 3 2 2
      1      | 4 2 2
      2      | 4 3 2
      3      | 4 3 3
      1      | 5 3 3
Now, normalize each row by subtracting the minimum element. The result is
0: 000
1: 100
2: 110
3: 210
4: 100
5: 110
6: 210
7: 100
8: 200
9: 210
10: 100
11: 200.
Prepare two hashes, mapping each row to the first index at which it appears and the last index at which it appears. Iterate through the keys and take the one with maximum last - first.
000: first is at 0, last is at 0
100: first is at 1, last is at 10
110: first is at 2, last is at 5
210: first is at 3, last is at 9
200: first is at 8, last is at 11
The best key is 100, since its sublist has length 9. The sublist is the (1+1)th element to the 10th.
This works because a sublist is balanced if and only if its first and last unnormalized histograms are the same up to adding a constant, which occurs if and only if the first and last normalized histograms are identical.
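Here is a minimal sketch of this normalized-histogram scan (names are mine); it runs in O(N*P) since each prefix histogram is normalized and hashed as a tuple:

def longest_balanced(vec1, vec2):
    counts = {k: 0 for k in vec1}
    first = {}
    best = (0, 0)  # half-open (start, end) of the best balanced sublist
    for idx in range(len(vec2) + 1):
        if idx > 0:
            counts[vec2[idx - 1]] += 1
        lo = min(counts.values())
        key = tuple(counts[k] - lo for k in vec1)  # normalized histogram
        if key in first:
            if idx - first[key] > best[1] - best[0]:
                best = (first[key], idx)
        else:
            first[key] = idx
    return best

vec1 = [1, 2, 3]
vec2 = [1, 2, 1, 3, 2, 1, 3, 1, 2, 3, 1]
i, j = longest_balanced(vec1, vec2)
print(i, j, vec2[i:j])  # the balanced sublist of length 9 from the example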
If memory usage is not important, it's easy...
You can build an N*P matrix and store in column i, for each value p, how many times p occurs among the first i elements of the second vector...
After completing the matrix, you can search for the columns i in which all the entries are equal. The maximum such i is the answer.
With randomization, you can get it down to linear time. The idea is to replace each of the P values with a random integer, such that those integers sum to zero. Now look for two prefix sums that are equal. This allows some small chance of false positives, which we could remedy by checking our output.
In Python 2.7:
# input:
vec1 = [1, 2, 3]
P = len(vec1)
vec2 = [1, 2, 1, 3, 2, 1, 3, 1, 2, 3, 1]
N = len(vec2)
# Choose big enough integer B. For each k in vec1, choose
# a random mod-B remainder r[k], so their mod-B sum is 0.
# Any P-1 of these remainders are independent.
import random
B = N*N*N
r = dict((k, random.randint(0,B-1)) for k in vec1)
s = sum(r.values())%B
r[vec1[0]] = (r[vec1[0]]+B-s)%B
assert sum(r.values())%B == 0
# For 0<=i<=N, let vec3[i] be mod-B sum of r[vec2[j]], for j<i.
vec3 = [0] * (N+1)
for i in range(1,N+1):
    vec3[i] = (vec3[i-1] + r[vec2[i-1]]) % B
# Find pair (i,j) so vec3[i]==vec3[j], and j-i is as large as possible.
# This is either a solution (subsequence vec2[i:j] is uniform) or a false
# positive. The expected number of false positives is < N*N/(2*B) < 1/N.
(i, j)=(0, 0)
first = {}
for k in range(N+1):
    v = vec3[k]
    if v in first:
        if k-first[v] > j-i:
            (i, j) = (first[v], k)
    else:
        first[v] = k
# output:
print "Found subsequence from", i, "(inclusive) to", j, "(exclusive):"
print vec2[i:j]
print "This is either uniform, or rarely, it is a false positive."
Here is an observation: you can't get a uniformly distributed sublist whose length is not a multiple of P. This implies that you only have to check the sublists of the second vector whose lengths are P, 2P, 3P, ... long, of which there are about (N/P)^2.
You can get this down to O(N) time, with no dependence on P by enhancing uty's solution.
For each row, instead of storing the normalized counts of each element, store a hash of the normalized counts while only keeping the normalized counts for the current index. During each iteration, you need to first update the normalized counts, which has an amortized cost of O(1) if each decrement of a count is paid for when it is incremented. Next you recompute the hash. The key here is that the hash needs to be easily updatable following an increment or decrement of one of the elements of the tuple that is being hashed.
At least one way of doing this hashing efficiently, with good theoretical independence guarantees, is shown in the answer to this question. Note that the O(lg P) cost of computing the exponential that determines the amount to add to the hash can be eliminated by precomputing the exponentials modulo the prime in advance, with a total running time of O(P) for the precomputation, giving a total running time of O(N + P) = O(N).
