Determine if some row permutation of a matrix is Toeplitz - algorithm

A Toeplitz matrix "is a matrix in which each descending diagonal from left to right is constant." Given a binary matrix M, is there an efficient algorithm to determine if there is a permutation of the rows which makes it Toeplitz?
For example, set
M= [0 1 1]
[1 1 0]
[1 0 1]
If you swap the first and second row you get
[1 1 0]
[0 1 1]
[1 0 1]
which is Toeplitz.
In Python you can make a random binary matrix as follows.
import numpy as np

n = 10
h = 10
M = np.random.randint(2, size=(h, n))
I would like to apply the test to M.
(Note the matrix M does not need to be square.)

This problem can be solved in linear O(h*w) time, where h is the number of rows and w is the number of columns.
Construct a graph where each vertex corresponds to a (w-1)-length substring that is either a prefix or a suffix of some row of the matrix. One vertex may correspond to several duplicate substrings. Connect these vertices with h edges, one per row of the matrix, directed from the vertex corresponding to that row's prefix to the vertex corresponding to that row's suffix.
To determine whether some row permutation gives a Toeplitz matrix, it is enough to check whether the constructed graph has an Eulerian path. To find the permutation itself, it is enough to find that Eulerian path: its edge order gives the row order.
We need some efficient way to match rows (edges) to substrings (vertices). The straightforward approach compares every row against every substring, which is not very interesting because of its O(h^2 * w) time complexity.
Building a generalized suffix tree (or suffix array) for the rows of the matrix needs only O(h*w) time, and this tree also lets us match vertices and edges in linear time: each internal node at depth w-1 represents some (w-1)-length substring (a vertex); each leaf attached directly to this node represents some row's suffix (an incoming edge); and each leaf attached to this node's children represents some row containing this substring as a prefix (an outgoing edge).
The other alternative is to use a hash map, with a (w-1)-length substring of a matrix row as the key and a pair of lists of row indexes (rows where this substring is a prefix, rows where it is a suffix) as the value. Compared to the suffix tree/array approach, this allows a simpler implementation, needs less memory (each key needs only space for a hash value and a pointer to the beginning of the substring), and should work faster on average, but it has an inferior worst-case complexity: O(h^2 * w).
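For concreteness, here is a minimal sketch of the hash-map variant (my own illustrative code, not the original poster's): it builds the prefix/suffix graph, checks the directed Eulerian-path degree conditions, and runs Hierholzer's algorithm to recover a row order.
from collections import defaultdict

def toeplitz_row_order(M):
    """Return a list of row indices that makes M Toeplitz (top to bottom),
    or None if no such permutation exists."""
    h, w = len(M), len(M[0])
    out_edges = defaultdict(list)                  # vertex -> [(row index, target vertex)]
    in_deg, out_deg = defaultdict(int), defaultdict(int)
    for r, row in enumerate(M):
        pre, suf = tuple(row[:-1]), tuple(row[1:])
        out_edges[pre].append((r, suf))            # edge: prefix -> suffix, labelled by the row
        out_deg[pre] += 1
        in_deg[suf] += 1

    # Degree conditions for a directed Eulerian path.
    vertices = set(in_deg) | set(out_deg)
    if any(abs(out_deg[v] - in_deg[v]) > 1 for v in vertices):
        return None
    starts = [v for v in vertices if out_deg[v] - in_deg[v] == 1]
    if len(starts) > 1:
        return None
    start = starts[0] if starts else next(iter(out_edges))

    # Hierholzer's algorithm; edges are consumed along the walk and recorded on backtrack.
    stack, order = [(start, None)], []
    while stack:
        v, via = stack[-1]
        if out_edges[v]:
            r, u = out_edges[v].pop()
            stack.append((u, r))
        else:
            stack.pop()
            if via is not None:
                order.append(via)

    if len(order) != h:                            # disconnected graph: no Eulerian path
        return None
    # The Eulerian path lists rows bottom-to-top; the backtrack order above
    # already reverses it, so 'order' is the top-to-bottom row permutation.
    return order
For the 3x3 example above this returns a row order such as [2, 1, 0], i.e. rows [1 0 1], [1 1 0], [0 1 1] from top to bottom, which is Toeplitz.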

One simple-minded approach that would work for small matrices is:
Sort the rows of M
For each choice of start row
    For each choice of end row
        construct a Toeplitz matrix T from the given start and end row
        Sort the rows of T and compare to M
        If you find a match then T is a permutation of M that is Toeplitz
This is based on the fact that a Toeplitz matrix is uniquely defined once you know the start and end rows.
However, this approach is not particularly efficient.
Example Python Code
M = [[0, 1, 1],
     [1, 1, 0],
     [1, 0, 1]]
n = len(M)
M2 = sorted(M)
for start in M2:
    for end in M2:
        v = end + start[1:]
        T = [v[s:s+n] for s in range(n-1, -1, -1)]
        if sorted(T) == M2:
            print 'Found Toeplitz representation'
            print T
prints
Found Toeplitz representation
[[0, 1, 1],
[1, 0, 1],
[1, 1, 0]]
Found Toeplitz representation
[[1, 0, 1],
[1, 1, 0],
[0, 1, 1]]
Found Toeplitz representation
[[1, 1, 0],
[0, 1, 1],
[1, 0, 1]]

You can conduct a preliminary check for an elimination condition:
Compute the column-wise sum of each column of the matrix.
In any permutation of the rows, the values stay in their columns, so the column sums do not change.
So in a candidate Toeplitz matrix the difference between the sums of any two neighbouring columns must be at most 1.
Also, if i and i+1 are two neighbouring columns, then:
If sum(i+1) = sum(i) + 1, then we know that bottom-most element in column i should be 0 and top-most element in column (i+1) should be 1.
If sum(i+1) = sum(i) - 1, then we know that bottom-most element in column i should be 1 and top-most element in column (i+1) should be 0.
If sum(i+1) = sum(i), then we know that bottom-most element in column i should be equal to top-most element in column (i+1).
You can also conduct a similar check by summing the rows and seeing whether there is any ordering in which the difference between the sums of any two neighbouring rows is at most one.
Of course, you will still have to conduct some combinatorial search, but the above filter may reduce the search scenarios.
This is because you now have to search for a pair of (candidate top and bottom) rows that satisfies the above 3 conditions for each pair of neighbouring columns.
Also, this optimization shall not be very helpful if the number of rows is much larger than the number of columns.
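A minimal sketch of this elimination filter (illustrative code, not part of the original answer); it only tests the necessary adjacent-column-sum condition, which holds because row permutations leave the column sums unchanged.
import numpy as np

def passes_column_sum_check(M):
    """Necessary condition only: in a binary Toeplitz matrix, column i+1 is
    column i shifted down one place, losing its bottom entry and gaining a new
    top entry, so adjacent column sums can differ by at most 1."""
    col_sums = np.asarray(M).sum(axis=0)
    return bool(np.all(np.abs(np.diff(col_sums)) <= 1))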

Related

Graph problem for checking whether a permutation A of the numbers 1 to N can be converted to a permutation B, given good pairs

I am not sure whether this is an appropriate question to post on this platform.
The problem statement is present here:
https://www.interviewbit.com/problems/permutation-swaps/
Other than that, here's a short summary of this graph-based question.
Problem statement with an example:
let N = 4
given permutation A for numbers 1 to 4,
A = [1, 3, 2, 4]
and given,
B = [1, 4, 2, 3]
other than that we are given pairs of positions,
C = [[2, 4]]
We have to find out whether we can convert permutation A to permutation B if only the numbers at the pairs of positions (good pairs) listed in C can be swapped.
Here, in the above example, for the pair (2, 4) in C, the number at position 2 in A is 3 and the number at position 4 is 4. If we swap the numbers at these two positions, we get [1, 4, 2, 3], which is equal to B. Hence, it is possible.
You really need to ask an actual question!
Here is a simple algorithm to start your thinking about what you want.
Add A to L, a list of reachable permutations
Loop P over permutations in L
    Loop S over C
        Apply S to P, giving R
        If R == B
            **DONE** TRUE
        If R not in L
            Add R to L
**DONE** FALSE
Algorithm:
Create a graph whose nodes are the positions 1..n, and with an edge between two positions if and only if you can swap them.
For every position where you have an unwanted number, if the number you want can't be reached through the graph, then no permutation can work and you should return 0.
If every position has its wanted number reachable, then return 1.
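Here is a minimal union-find sketch of that check (my own illustrative code, using the 1-based positions of the problem statement):
def can_transform(A, B, C):
    """Return 1 if A can be rearranged into B using only swaps at the good
    pairs in C, else 0. Values can be moved freely inside each connected
    component of the swap graph."""
    n = len(A)
    parent = list(range(n + 1))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]          # path halving
            x = parent[x]
        return x

    for x, y in C:                                 # build the swap graph's components
        parent[find(x)] = find(y)

    pos_in_A = {value: idx for idx, value in enumerate(A, start=1)}
    for i in range(1, n + 1):
        want = B[i - 1]
        # the wanted value must be reachable from position i through good pairs
        if A[i - 1] != want and find(i) != find(pos_in_A[want]):
            return 0
    return 1

# can_transform([1, 3, 2, 4], [1, 4, 2, 3], [[2, 4]]) -> 1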
But why is it sufficient to have every wanted number reachable?
To start, construct a maximal spanning forest in the graph according to the following pseudo-code:
foreach node in 1..n:
    if node not in any tree:
        construct maximal tree by breadth-first search
        if tree has > 1 element:
            add tree to list of trees in our forest
And now we construct our permutation as follows:
while the forest is non-empty:
    pick a tree in the forest
    find a leaf in the tree
    with a chain of swaps, get the right value into that node
    remove the leaf from the tree
    if the tree has only 1 element left, remove the tree from the forest
I will omit the proof that this always produces a permutation that gets to the desired end result. But it does.

Use FFT to find all possible fixed-size subset sums

I need to solve the following problem: given an integer sequence x of size N, and a subset size k, find all the possible subset sums. A subset sum is the sum of elements in the subset.
If elements in x are allowed to appear many times (up to k of course) in a subset (sub-multiset), this problem has a pseudo polynomial time solution via FFT. Here is an example:
x = [0, 1, 2, 3, 6]
k = 4
xFrequency = [1, 1, 1, 1, 0, 0, 1] # On the support of [0, 1, 2, 3, 4, 5, 6]
sumFrequency = selfConvolve(xFrequency, times = 4) # A fast approach is to simply raise the power of the Fourier series.
sumFrequency > 0 # Gives a boolean vector indicating all possible size-k subset sums.
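For reference, here is a runnable version of that sketch using plain repeated convolution in place of the hypothetical selfConvolve helper (i.e. raising the frequency polynomial to the k-th power):
import numpy as np

x = [0, 1, 2, 3, 6]
k = 4
xFrequency = np.array([1, 1, 1, 1, 0, 0, 1])    # support [0..6]

sumFrequency = np.array([1])                    # the polynomial "1" (the empty sum)
for _ in range(k):
    sumFrequency = np.convolve(sumFrequency, xFrequency)

possibleSums = np.nonzero(sumFrequency > 0)[0]  # all size-k sub-multiset sums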
But what can be done if an element cannot show up multiple times in a subset?
I came up with the following method but am unsure of its correctness. The idea is to first find the frequencies of sums that are produced by adding at least 2 identical elements:
y = [0, 2, 4, 6, 12] # = [0, 1, 2, 3, 6] + [0, 1, 2, 3, 6]
yFrequency = [0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1]
sumFrequencyWithRedundancy = convolve(yFrequency, x, x)
My reasoning is that since y represents all possible sums of 2 identical elements, then every sum in y + x + x is guaranteed to have been produced by adding at least 2 identical elements. Finally
sumFrequencyNoRedundancy = sumFrequency - sumFrequencyWithRedundancy
sumFrequencyNoRedundancy > 0
Any mistake or any other established method for solving the problem?
Thanks!
Edits:
After some tests, it does not work. There turn out to be many more combinations that should be excluded from sumFrequency besides sumFrequencyWithRedundancy, and the combinatorial analysis seems to escalate rapidly with k, eventually making it less efficient than brute-force summation.
My motivation was to find all possible sample sums given sampling without replacement and the sample size. Then I came across the idea of solving the standard subset sum problem via FFT --- with no constraint on the subset size, and without needing the qualifying subsets themselves. The reference materials can easily be found online; it is basically a divide-and-conquer approach:
Divide the superset into 2 sets, left and right.
Compute all possible subset sums in the left and right sets. The sums are represented by 2 boolean vectors.
Convolve the 2 boolean vectors.
Find if the target sum is indicated in the final boolean vector.
You can see why the algorithm works for the standard subset sum problem.
If anyone can let me know some work on how to find all possible size-k subset sums, I would really appreciate it!
Given k and the n-element array x, it suffices to evaluate the degree-k coefficient in z of the polynomial
product for i = 1..n of (1 + y^x[i] * z).
This coefficient is a polynomial in y where the exponents with nonzero coefficients indicate the sums that can be formed using exactly k distinct terms.
One strategy is to split x with reasonably balanced sums, evaluate each half mod z^(k+1), and then multiply using the school algorithm for the outer multiplications and FFT (or whatever) for the inner. This should end up costing roughly O(k^2 S log^2 S).
The idea for evaluating elementary symmetric polynomials efficiently is due to Ben-Or.
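As a cross-check of that description, here is a small dynamic-programming sketch (my own illustrative code, without the splitting/FFT speed-up) that tracks the coefficients of z^j as sets of achievable sums:
def size_k_subset_sums(x, k):
    """All sums formed from exactly k distinct positions of x.
    dp[j] holds the exponents of y appearing in the z^j coefficient of
    the product over i of (1 + y^x[i] * z), i.e. sums achievable with j elements."""
    dp = [set() for _ in range(k + 1)]
    dp[0].add(0)
    for value in x:
        for j in range(k - 1, -1, -1):     # descending j: each element used at most once
            dp[j + 1].update(s + value for s in dp[j])
    return sorted(dp[k])

# size_k_subset_sums([0, 1, 2, 3, 6], 4) -> [6, 9, 10, 11, 12]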

Is there any algorithm to address the longest common subsequence problem with different weights for each character?

I'm looking for an algorithm that addresses the LCS problem for two strings with the following conditions:
Each string consists of English characters and each character has a weight. For example:
sequence 1 (S1): "ABBCD" with weights [1, 2, 4, 1, 3]
sequence 2 (S2): "TBDC" with weights [7, 5, 1, 2]
Suppose that MW(s, S) is defined as the maximum weight of the sub-sequence s in string S with respect to the associated weights. The heaviest common sub-sequence (HCS) is defined as:
HCS = argmax over s of min(MW(s, S1), MW(s, S2))
The algorithm output should be the indexes of HCS in both strings and the weight. In this case, the indexes will be:
I_S1 = [2, 4] --> MW("BD", "ABBCD") = 7
I_S2 = [1, 2] --> MW("BD", "TBDC") = 6
Therefore HCS = "BD", and weight = min(MW(s, S1), MW(s, S2)) = 6.
The table that you need to build will have this:
for each position in sequence 1
    for each position in sequence 2
        for each extreme pair of (weight1, weight2)
            (last_position1, last_position2)
Where an extreme pair is one for which it is not possible to find a common subsequence up to that point whose weight in sequence 1 and weight in sequence 2 are both >= the pair's, with at least one strictly >.
There may be multiple extreme pairs, each higher in one sequence's weight and lower in the other's.
The rule is that at the (i, -1) or (-1, j) positions, the only extreme pair is the empty set with weight 0. At any other position we merge the extreme pairs for (i-1, j) and (i, j-1). Then, if seq1[i] = seq2[j], we also add the options where you went to (i-1, j-1) and then included i and j in the respective subsequences. (So add weight1[i] and weight2[j] to those weights, then do the merge.)
For that merge you can sort all of the extreme values for both previous points by weight1 descending, then throw away all of the ones whose weight2 is less than or equal to the best weight2 already seen earlier in the sorted order.
When you reach the end you can find the extreme pair with the highest min, and that is your answer. You can then walk the data structure back to find the subsequences in question.
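A compact sketch of that table (my own illustrative code): dp[i][j] stores the extreme (Pareto-optimal) pairs of (weight in S1, weight in S2) over common subsequences of the prefixes, and the answer is the best min in the last cell. Recovering the indices would additionally require back-pointers, as described above.
def heaviest_common_subsequence(s1, w1, s2, w2):
    """Best value of min(weight in s1, weight in s2) over all common subsequences."""
    def merge(pairs):
        # keep only the extreme (non-dominated) pairs: sort by weight1 descending,
        # keep a pair only if its weight2 beats everything already kept
        frontier, best2 = [], -1
        for a, b in sorted(set(pairs), key=lambda p: (-p[0], -p[1])):
            if b > best2:
                frontier.append((a, b))
                best2 = b
        return frontier

    n, m = len(s1), len(s2)
    dp = [[[(0, 0)] for _ in range(m + 1)] for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            candidates = dp[i - 1][j] + dp[i][j - 1]
            if s1[i - 1] == s2[j - 1]:
                candidates += [(a + w1[i - 1], b + w2[j - 1]) for a, b in dp[i - 1][j - 1]]
            dp[i][j] = merge(candidates)
    return max(min(a, b) for a, b in dp[n][m])

# heaviest_common_subsequence("ABBCD", [1, 2, 4, 1, 3], "TBDC", [7, 5, 1, 2]) -> 6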

Summing a given series of numbers in order to reset the summation as many times as possible algorithm

I'm looking for an efficient algorithm (not necessarily a code) for solving the following question:
Given n positive and negative numbers that sum to zero, we would like to find a starting index from which the cumulative (cyclic) sum hits zero as many times as possible.
It doesn't have to be done in a specific manner, but the important thing here is efficiency: we want the algorithm/idea to do this in less than quadratic time complexity.
An example:
Given the numbers: 2, -1, 3, 1, -3, -2:
If we start summing up with 2 (first index), the sum will be zero only once (at the end of the summation), but starting with -1 will yield zero twice during the summation.
The given numbers may have more than one "best index", but we would like to find at least one of these indexes.
I've tried doing it with binary search, but didn't make much progress- so any hints/help will be appreciated.
You can compute prefix sums. In terms of prefix sums, zeros are positions that have the same value of a prefix sum as the start position. So the problem is reduced to finding the most frequent element in the array of prefix sums. It can be solved efficiently using sorting or hash tables.
Here is an example:
Input: {2, -1, 3, 1, -3, -2}
Prefix sums: {0, 2, 1, 4, 5, 2, 0}
The most frequent element is 2 (the first and last prefix sums describe the same starting cut, so the two 0s count as a single start position). The first occurrence of 2 is at position 1 of the prefix sums, so starting from the second element yields the optimal answer.
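A small sketch of this (illustrative code, not the answerer's): compute the prefix sums, pick the most frequent value among them, and start right after its first occurrence.
from collections import Counter
from itertools import accumulate

def best_start_index(nums):
    """0-based index at which to start the cyclic summation so the running
    sum hits zero as many times as possible (assumes sum(nums) == 0)."""
    prefix = [0] + list(accumulate(nums))      # prefix[i] = sum of nums[:i]
    # prefix[0] and prefix[-1] are equal and describe the same cyclic cut,
    # so count each cut once; starting at cut i yields a zero every time a
    # later cut (cyclically) has the same prefix value.
    value, _ = Counter(prefix[:-1]).most_common(1)[0]
    return prefix.index(value)

# best_start_index([2, -1, 3, 1, -3, -2]) -> 1, i.e. start with the -1 (two zeros)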

find kth smallest number in O(logn) time

Here is the problem: given an unsorted array a[n], I need to find the kth smallest number in a range [i, j], where 1<=i<=j<=n and k<=j-i+1.
Typically I would use quick-find to do the job, but it is not fast enough when there are many query requests with different ranges [i, j]. I can hardly figure out an algorithm to do the query in O(logn) time (preprocessing is allowed).
Any idea is appreciated.
PS
Let me make the problem easier to understand. Any kind of preprocessing is allowed, but the query needs to be done in O(logn) time. And there will be many (more than 1) queries, like find the 1st in range [3,7], or the 3rd in range [10,17], or the 11th in range [33,52].
By range [i, j] I mean in the original array, not sorted or something.
For example, a[5] = {3,1,7,5,9}, query 1st in range [3,4] is 5, 2nd in range [1,3] is 5, 3rd in range [0,2] is 7.
If pre-processing is allowed and not counted towards the time complexity, just use that to construct sub-lists so that you can efficiently find the element you're looking for. As with most optimisations, this trades space for time.
Your pre-processing step is to take your original list of n numbers and create a number of new sublists.
Each of these sublists is a portion of the original, starting with the nth element, extending for m elements and then sorted. So your original list of:
{3, 1, 7, 5, 9}
gives you:
list[0][0] = {3}
list[0][1] = {1, 3}
list[0][2] = {1, 3, 7}
list[0][3] = {1, 3, 5, 7}
list[0][4] = {1, 3, 5, 7, 9}
list[1][0] = {1}
list[1][1] = {1, 7}
list[1][2] = {1, 5, 7}
list[1][3] = {1, 5, 7, 9}
list[2][0] = {7}
list[2][1] = {5, 7}
list[2][2] = {5, 7, 9}
list[3][0] = {5}
list[3][1] = {5,9}
list[4][0] = {9}
This isn't a cheap operation (in time or space) so you may want to maintain a "dirty" flag on the list so you only perform it the first time after you do an modifying operation (insert, delete, change).
In fact, you can use lazy evaluation for even more efficiency. Basically set all sublists to an empty list when you start and whenever you perform a modifying operation. Then, whenever you attempt to access a sublist and it's empty, calculate that sublist (and that one only) before trying to get the kth value out of it.
That ensures sublists are evaluated only when needed and cached to prevent unnecessary recalculation. For example, if you never ask for a value from the 3-through-6 sublist, it's never calculated.
The pseudo-code for creating all the sublists is basically (for loops inclusive at both ends):
for n = 0 to a.lastindex:
    create array list[n]
    for m = 0 to a.lastindex - n:
        create array list[n][m]
        for i = 0 to m:
            list[n][m][i] = a[n+i]
        sort list[n][m]
The code for lazy evaluation is a little more complex (but only a little), so I won't provide pseudo-code for that.
Then, in order to find the kth smallest number in the range i through j (where i and j are the original indexes), you simply look up lists[i][j-i][k-1], a very fast O(1) operation:
1st in range [3,4] (values 5,9):   list[3][4-3=1][1-1=0] = 5
2nd in range [1,3] (values 1,7,5): list[1][3-1=2][2-1=1] = 5
3rd in range [0,2] (values 3,1,7): list[0][2-0=2][3-1=2] = 7
Here's some Python code which shows this in action:
orig = [3,1,7,5,9]
print orig
print "====="
list = []
for n in range(len(orig)):
    list.append([])
    for m in range(len(orig) - n):
        list[-1].append([])
        for i in range(m+1):
            list[-1][-1].append(orig[n+i])
        list[-1][-1] = sorted(list[-1][-1])
        print "(%d,%d)=%s"%(n,m,list[-1][-1])
print "====="
# Gives xth smallest in index range y through z inclusive.
x = 1; y = 3; z = 4; print "(%d,%d,%d)=%d"%(x,y,z,list[y][z-y][x-1])
x = 2; y = 1; z = 3; print "(%d,%d,%d)=%d"%(x,y,z,list[y][z-y][x-1])
x = 3; y = 0; z = 2; print "(%d,%d,%d)=%d"%(x,y,z,list[y][z-y][x-1])
print "====="
print "====="
As expected, the output is:
[3, 1, 7, 5, 9]
=====
(0,0)=[3]
(0,1)=[1, 3]
(0,2)=[1, 3, 7]
(0,3)=[1, 3, 5, 7]
(0,4)=[1, 3, 5, 7, 9]
(1,0)=[1]
(1,1)=[1, 7]
(1,2)=[1, 5, 7]
(1,3)=[1, 5, 7, 9]
(2,0)=[7]
(2,1)=[5, 7]
(2,2)=[5, 7, 9]
(3,0)=[5]
(3,1)=[5, 9]
(4,0)=[9]
=====
(1,3,4)=5
(2,1,3)=5
(3,0,2)=7
=====
The current solution is O((logn)^2). I am pretty sure it can be modified to run in O(logn). The main advantage of this algorithm over paxdiablo's algorithm is space efficiency: it needs O(nlogn) space, not O(n^2) space.
First, the complexity of finding the kth smallest element from two sorted arrays of lengths m and n is O(logm + logn). The complexity of finding the kth smallest element from sorted arrays of lengths a, b, c, d, ... is O(loga + logb + ...).
Now, sort the whole array and store it. Sort the first half and the second half of the array and store them, and so on. You will have 1 sorted array of length n, 2 sorted arrays of length n/2, 4 sorted arrays of length n/4 and so on. Total memory required = 1*n + 2*(n/2) + 4*(n/4) + 8*(n/8) + ... = nlogn.
Once you have i and j, figure out the list of subarrays which, when concatenated, give you the range [i,j]. There are going to be O(logn) such arrays. Finding the kth smallest number among them takes O((logn)^2) time.
Example for the last paragraph:
Assume the array is of size 8 (indexed from 0 to 7). You have the following sorted lists:
A:0-7, B:0-3, C:4-7, D:0-1, E:2-3, F:4-5, G:6-7.
Now construct a tree with pointers to these arrays such that every node contains its immediate constituents. A will be root, B and C are its children and so on.
Now implement a recursive function that returns a list of arrays.
def getArrays(node, i, j):
    if i == node.min and j == node.max:
        return [node]
    if i <= node.left.max:
        if j <= node.left.max:
            # (i,j) lies entirely within the left node
            return getArrays(node.left, i, j)
        else:
            # (i,j) is spread over the left and right nodes
            return getArrays(node.left, i, node.left.max) + getArrays(node.right, node.right.min, j)
    else:
        # (i,j) lies entirely within the right node
        return getArrays(node.right, i, j)
Preprocess: Make an nxn array where the [k][r] element is the kth smallest element of the first r elements (1-indexed for convenience).
Then, given some particular range [i,j] and value for k, do the following:
Find the element at the [k][j] slot of the matrix; call this x.
go down the i-1 column of your matrix and find how many values in it are smaller than or equal to x (treat column 0 as having 0 smaller entries). By construction, this column will be sorted (all columns will be sorted), so it can be found in log time. Call this value s
Find the element in the [k+s][j] slot of the matrix. This is your answer.
E.g., given 3 1 7 5 9
3 1 1 1 1
X 3 3 3 3
X X 7 5 5
X X X 7 7
X X X X 9
Now, if we're asked for the 2nd smallest in [2,4] range (again, 1-indexing), I first find the 2nd smallest in [1,4] range, which is 3. I then look at column 1 and see that there is 1 element less than or equal to 3. Finally, I find the 3rd smallest in [1,4] range at the [3][4] slot, which is 5, as desired.
This takes n^2 space, and log(n) lookup time.
This one does not require pre-processing but is somewhat slower than O(logN). It's significantly faster than a naive iterate-and-count, and it can support dynamic modification of the sequence.
It goes like this. Suppose the length n satisfies n = 2^x for some x. Construct a segment tree whose root node represents [0, n-1]. For each node representing a segment [a, b] with b > a, give it two child nodes representing [a, (a+b)/2] and [(a+b)/2+1, b]. (That is, do a recursive divide-by-two.)
Then, at each node, maintain a separate binary search tree for the numbers within that segment. Each modification of the sequence therefore takes O(logN) [on the segment tree] * O(logN) [on the BST]. Queries can be done like this: let Q(a,b,x) be the rank of x within the segment [a,b]. Obviously, if Q(a,b,x) can be computed efficiently, a binary search on x can compute the desired answer (with an extra O(logE) factor).
Q(a,b,x) can be computed as follows: find the smallest set of segments that makes up [a,b], which can be done in O(logN) on the segment tree. For each such segment, query its binary search tree for the number of elements less than x. Add all these counts to get Q(a,b,x).
This should be O(logN * logE * logN). Not exactly what you asked for, though.
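A compact static sketch of this idea (my own illustrative code): a segment tree whose nodes hold sorted copies of their segments (a "merge sort tree") instead of balanced BSTs, so it does not support updates, but it shows the rank query and the outer binary search over values.
import bisect

class MergeSortTree:
    """Each tree node stores the sorted multiset of its segment; counting
    elements below a value in a range costs O(log^2 n), and a binary search
    over the sorted values then finds the kth smallest in the range."""
    def __init__(self, a):
        self.n = len(a)
        self.tree = [[] for _ in range(2 * self.n)]
        for i, v in enumerate(a):
            self.tree[self.n + i] = [v]
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = sorted(self.tree[2 * i] + self.tree[2 * i + 1])

    def count_leq(self, lo, hi, x):
        """Number of elements <= x in a[lo:hi] (hi exclusive)."""
        lo += self.n
        hi += self.n
        count = 0
        while lo < hi:
            if lo & 1:
                count += bisect.bisect_right(self.tree[lo], x)
                lo += 1
            if hi & 1:
                hi -= 1
                count += bisect.bisect_right(self.tree[hi], x)
            lo //= 2
            hi //= 2
        return count

    def kth_smallest(self, lo, hi, k):
        """kth (1-based) smallest element of a[lo:hi]."""
        values = self.tree[1]                      # all values, sorted
        left, right = 0, len(values) - 1
        while left < right:                        # binary search over candidate values
            mid = (left + right) // 2
            if self.count_leq(lo, hi, values[mid]) >= k:
                right = mid
            else:
                left = mid + 1
        return values[left]

# t = MergeSortTree([3, 1, 7, 5, 9])
# t.kth_smallest(3, 5, 1) -> 5; t.kth_smallest(1, 4, 2) -> 5; t.kth_smallest(0, 3, 3) -> 7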
In O(log n) time it's not possible to read all of the elements of the array. Since it's not sorted, and there's no other provided information, this is impossible.
There's no way you can do better than O(n) in both worst and average case. You have to look at every single element.
