APL find frequency of elements in a matrix

I have this piece of code
((⍳3)∘.+(⍳2))
which generates the following matrix
2 3
3 4
4 5
I want to find the number of occurrences of each unique element in the result, i.e. the occurrences of 2, 3, 4, 5.
I tried using "∘.=" with the matrix itself and then reshaping so that the elements of each sub-matrix are transformed into a row,
using
6 6⍴ ((⍳3)∘.+(⍳2))∘.=((⍳3)∘.+(⍳2))
which gives the following result
1 0 0 0 0 0 for 2
0 1 1 0 0 0 for 3
0 1 1 0 0 0 for 3
0 0 0 1 1 0 for 4
0 0 0 1 1 0 for 4
0 0 0 0 0 1 for 5
As you can see, it still contains a separate row for each duplicate element, and I'm lost as of now.
Any help will be appreciated.

You should do ∘.= between the unique elements in the matrix and a flat vector of all elements, like:
m ← ((⍳3)∘.+(⍳2))
(∪,m) ∘.= ,m
1 0 0 0 0 0
0 1 1 0 0 0
0 0 0 1 1 0
0 0 0 0 0 1
Then just do +/ on it to get the frequencies of ∪,m
+/ (∪,m) ∘.= ,m
1 2 2 1
∪,m
2 3 4 5
(Tested on GNU APL.)
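As a quick cross-check outside APL (my own addition, not part of either answer), the same frequencies can be reproduced in Python:
from collections import Counter

m = [i + j for i in range(1, 4) for j in range(1, 3)]  # ravel of (⍳3)∘.+(⍳2)
print(dict(Counter(m)))  # {2: 1, 3: 2, 4: 2, 5: 1}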

Dyalog APL version 14.0 has the ⌸ Key operator exactly for this; you just need to ravel your data:
{≢⍵}⌸ ,((⍳3)∘.+(⍳2))
1 2 2 1
You can even use the left argument of ⌸'s operand function to create a table:
{⍺,≢⍵}⌸ ,((⍳3)∘.+(⍳2))
2 1
3 2
4 2
5 1

Related

How to find the number of paths between 2 nodes of a certain length

Given a graph G and its adjacency matrix, how can I find the number of paths between 2 given nodes of a certain length?
I've thought of multiplying the matrix k times and then reading off A^k[i,j], but I don't know how to build the algorithm. Or is that even the best solution when it comes to complexity?
If you want to find the number of paths of length k between two nodes, just raise the adjacency matrix to the k-th power.
The reason for this is simple:
If there is an edge i→j and an edge j→s, then there is a path from i to s through j, and it is counted in (A^2)[i,s]. The diagonal entries (A^2)[i,i] are the degrees of the nodes (for an undirected graph).
Here is an adjacency matrix for a graph:
0 1 1 0 0 0 0 0 0 0
0 1 1 0 0 1 0 0 0 0
0 0 0 1 1 0 0 0 0 0
1 0 0 1 1 0 0 0 0 0
0 0 0 0 1 1 1 0 1 0
1 0 0 0 0 0 0 1 0 0
0 0 0 0 0 0 1 1 1 0
0 0 0 0 0 0 0 1 0 1
0 0 0 0 0 0 0 1 0 1
0 0 0 1 0 0 0 0 0 1
Let's say we want to find the number of length-3 paths between nodes 2 and 5. For this we need to find A^3[2, 5].
There are plenty of algorithms for matrix multiplication, and some languages have it built in.
So if our adjacency matrix is called A, we want A * A * A.
This gives us:
2 1 1 2 3 2 1 1 1 0
2 2 2 2 3 2 1 2 1 1
2 1 1 1 3 2 3 3 3 1
2 2 2 2 4 3 3 3 3 1
1 1 1 1 1 1 3 8 3 6
0 1 1 2 1 1 0 1 0 2
0 0 0 2 0 0 1 5 1 6
1 0 0 3 1 0 0 1 0 3
1 0 0 3 1 0 0 1 0 3
2 1 1 3 3 1 1 0 1 1
Looking up A^3[2, 5] (with 0-based node indices) gives 2, which is the number of length-3 paths between the two nodes.
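For concreteness, here is a small NumPy sketch (my own illustration, not from the original answer) that raises the adjacency matrix above to the third power and reads off the entry, using 0-based node labels:
import numpy as np

# Adjacency matrix of the example graph above.
A = np.array([
    [0,1,1,0,0,0,0,0,0,0],
    [0,1,1,0,0,1,0,0,0,0],
    [0,0,0,1,1,0,0,0,0,0],
    [1,0,0,1,1,0,0,0,0,0],
    [0,0,0,0,1,1,1,0,1,0],
    [1,0,0,0,0,0,0,1,0,0],
    [0,0,0,0,0,0,1,1,1,0],
    [0,0,0,0,0,0,0,1,0,1],
    [0,0,0,0,0,0,0,1,0,1],
    [0,0,0,1,0,0,0,0,0,1],
])

A3 = np.linalg.matrix_power(A, 3)  # A * A * A
print(A3[2, 5])                    # number of length-3 paths from node 2 to node 5 -> 2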

Sorting rows and columns of adjacency matrix to reveal cliques

I'm looking for a reordering technique to group connected components of an adjacency matrix together.
For example, I've made an illustration with two groups, blue and green. Initially the '1' entries are scattered across the rows and columns of the matrix. By reordering the rows and columns, all the '1's can be gathered into two contiguous blocks, revealing the blue and green components more clearly.
I can't remember what this reordering technique is called. I've searched for many combinations of adjacency matrix, clique, sorting, and reordering.
The closest hits I've found are:
symrcm, which moves the elements closer to the diagonal but does not make groups;
"Is there a way to reorder the rows and columns of a matrix to create a dense corner, in R?", which focuses on removing completely empty rows and columns.
Please either provide the common name for this technique so that I can google more effectively, or point me in the direction of a Matlab function.
I don't know whether there is a better alternative that would give you direct results, but here is one approach which may serve your purpose.
Your input:
>> A
A =
0 1 1 0 1
1 0 0 1 0
0 1 1 0 1
1 0 0 1 0
0 1 1 0 1
Method 1
Take the first column and the first row as the row mask (maskRow) and the column mask (maskCol), respectively.
Build the masks from the first column and the first row:
maskRow = A(:,1)==1;
maskCol = A(1,:)~=1;
Rearrange the Rows (according to the Row-mask)
out = [A(maskRow,:);A(~maskRow,:)];
Gives something like this:
out =
1 0 0 1 0
1 0 0 1 0
0 1 1 0 1
0 1 1 0 1
0 1 1 0 1
Rearrange columns (according to the column-mask)
out = [out(:,maskCol),out(:,~maskCol)]
Gives the desired results:
out =
1 1 0 0 0
1 1 0 0 0
0 0 1 1 1
0 0 1 1 1
0 0 1 1 1
Just as a check that the indices end up where they are supposed to be, or in case you want the corresponding re-arranged indices:
Before Re-arranging:
idx = reshape(1:25,5,[])
idx =
1 6 11 16 21
2 7 12 17 22
3 8 13 18 23
4 9 14 19 24
5 10 15 20 25
After re-arranging (same process we did before)
outidx = [idx(maskRow,:);idx(~maskRow,:)];
outidx = [outidx(:,maskCol),outidx(:,~maskCol)]
Output:
outidx =
2 17 7 12 22
4 19 9 14 24
1 16 6 11 21
3 18 8 13 23
5 20 10 15 25
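For readers who prefer Python over MATLAB, here is a minimal NumPy sketch of Method 1 (my own translation, not part of the original answer); it reproduces the same row and column rearrangement on the example matrix:
import numpy as np

A = np.array([[0,1,1,0,1],
              [1,0,0,1,0],
              [0,1,1,0,1],
              [1,0,0,1,0],
              [0,1,1,0,1]])

mask_row = A[:, 0] == 1   # rows whose first-column entry is 1
mask_col = A[0, :] != 1   # columns whose first-row entry is 0

out = np.vstack([A[mask_row], A[~mask_row]])            # rearrange rows
out = np.hstack([out[:, mask_col], out[:, ~mask_col]])  # rearrange columns
print(out)  # the same 2-block / 3-block grouping shown above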
Method 2
For the generic case, when you don't know the matrix beforehand, here is the procedure to find maskRow and maskCol.
Logic used:
Take the first row and consider it the column mask (maskCol).
For the 2nd row to the last row, repeat the following:
Compare the current row with maskCol. If any value matches maskCol, take the element-wise logical OR of the two and use it as the new maskCol.
The same process finds maskRow, iterating over the columns instead.
Code:
%// If you have a square matrix, you can combine both these loops into a single loop.
maskCol = A(1,:);
for ii = 2:size(A,1)
if sum(A(ii,:) & maskCol)>0
maskCol = maskCol | A(ii,:);
end
end
maskCol = ~maskCol;
maskRow = A(:,1);
for ii = 2:size(A,2)
if sum(A(:,ii) & maskRow)>0
maskRow = maskRow | A(:,ii);
end
end
Here is an example to try that:
%// Here I removed some 'ones' from first, last rows and columns.
%// Compare it with the original example.
A = [0 0 1 0 1
0 0 0 1 0
0 1 1 0 0
1 0 0 1 0
0 1 0 0 1];
Then, repeat the procedure you followed before:
out = [A(maskRow,:);A(~maskRow,:)]; %// same code used
out = [out(:,maskCol),out(:,~maskCol)]; %// same code used
Here is the result:
>> out
out =
0 1 0 0 0
1 1 0 0 0
0 0 0 1 1
0 0 1 1 0
0 0 1 0 1
Note: This approach works for most cases, but it may still fail in some rare ones.
Here is an example:
%// this works well.
A = [0 0 1 0 1 0
1 0 0 1 0 0
0 1 0 0 0 1
1 0 0 1 0 0
0 0 1 0 1 0
0 1 0 0 1 1];
%// This may not
%// Second col, last row changed to zero from one
A = [0 0 1 0 1 0
1 0 0 1 0 0
0 1 0 0 0 1
1 0 0 1 0 0
0 0 1 0 1 0
0 0 0 0 1 1];
Why does it fail?
As we loop through each row (to find the column mask), when we move to the 3rd row, none of its columns match the current maskCol, so the information carried by the 3rd row (the 1 in its 2nd element) is lost.
This is a rare case, because some other row might still carry the same information. See the first example: there, too, none of the elements of the third row match the 1st row, but since the last row carries the same information (a 1 in the 2nd element), it gave correct results. Something like this only happens in rare cases, but it is good to know the limitation.
Method 3
This one is a brute-force alternative. It can be applied if you think the previous approach might fail. Here, we use a while loop to run the previous mask-finding code a number of times with the updated maskCol, so that it settles on the correct mask.
Procedure:
maskCol = A(1,:);
count = 1;
while(count<3)
for ii = 2:size(A,1)
if sum(A(ii,:) & maskCol)>0
maskCol = maskCol | A(ii,:);
end
end
count = count+1;
end
The previous example (where Method 2 fails) is run with and without the while loop.
Without Brute force:
>> out
out =
1 0 1 0 0 0
1 0 1 0 0 0
0 0 0 1 1 0
0 1 0 0 0 1
0 0 0 1 1 0
0 0 0 0 1 1
With Brute-Forcing while loop:
>> out
out =
1 1 0 0 0 0
1 1 0 0 0 0
0 0 0 1 1 0
0 0 1 0 0 1
0 0 0 1 1 0
0 0 0 0 1 1
The number of iterations required to get the correct result may vary, but it is safe to use a generous number.
Good Luck!

An algorithm to detect permutations of Hankel matrices

I am trying to write code to detect whether a matrix is a permutation of a Hankel matrix, but I can't think of an efficient solution other than very slow brute force. Here is the spec.
Input: An n by n matrix M whose entries are 1 or 0.
Input format: Space separated rows. One row per line. For example
0 1 1 1
0 1 0 1
0 1 0 0
1 0 1 1
Output: A permutation of the rows and columns of M so that M is a Hankel matrix if that is possible. A Hankel matrix has constant skew-diagonals (positive sloping diagonals).
When I say a permutation, I mean we can apply one permutation to the order of the rows and a possibly different one to the columns.
I would be very grateful for any ideas.
Without loss of generality, we will assume that there are fewer 0's than 1's. We can then find the possible diagonals in a Hankel matrix that could be 0's to give us the appropriate number of 0's in the entire matrix, and this will give us the possible Hankel matrices. From there, you can count the number of 0's in each column and compare it to the number of 0's in the columns of the original matrix. Once you have done this, you have a much smaller space in which to perform a brute-force search: permuting only columns and rows that have the right number of 0's.
Example: the OP's suggested 4x4 matrix has 7 0's. We need to partition this number using the set of diagonal lengths {4,3,3,2,2,1,1}. So our partitions would be:
{4,3}
{4,2,1} (2 of these matrices)
{3,3,1}
{3,2,2}
{3,2,1,1} (2 of these matrices)
And this gives us the Hankel Matrices (excluding symmetries)
1 1 0 0 1 1 1 0 0 1 1 0 1 1 0 1
1 0 0 1 1 1 0 1 1 1 0 1 1 0 1 0
0 0 1 1 1 0 1 0 1 0 1 0 0 1 0 1
0 1 1 1 0 1 0 0 0 1 0 1 1 0 1 0
1 0 0 1 0 1 1 1 0 1 0 1
0 0 1 1 1 1 1 0 1 0 1 1
0 1 1 0 1 1 0 0 0 1 1 0
1 1 0 1 1 0 0 0 1 1 0 0
The original matrix has 3, 1, 2, and 1 0's in its four columns. Comparing this to the 7 possible Hankel matrices leaves 2 possibilities:
1 1 1 0 0 1 1 1
1 1 0 1 1 1 1 0
1 0 1 0 1 1 0 0
0 1 0 0 1 0 0 0
Now, there are only 4 possible permutations that could map the original matrix to each of these: we have only 1 choice based on the columns with 2 and 3 0's, but 2 choices for the columns with 1 0's, and also 2 choices for the rows with 1 0's. Checking those permutations, we see that the following Hankel matrix is a permutation of the original
0 1 1 1
1 1 1 0
1 1 0 0
1 0 0 0
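As a sketch of the first step described above (Python, my own illustration rather than the answerer's code), here is one way to enumerate which anti-diagonals of an n x n Hankel 0/1 matrix can be set entirely to zero so that the total number of zeros matches. Note that it lists mirror-symmetric choices separately, whereas the answer groups them; for the 4x4, 7-zero example it returns 14 choices, i.e. the 7 patterns above together with their 180-degree rotations.
def zero_diagonal_choices(n, num_zeros):
    # Lengths of the 2n-1 anti-diagonals (indexed by i + j): 1, 2, ..., n, ..., 2, 1.
    lengths = list(range(1, n)) + list(range(n, 0, -1))
    results = []

    def rec(idx, remaining, chosen):
        if remaining == 0:        # enough zeros placed; remaining diagonals stay all ones
            results.append(chosen)
            return
        if idx == len(lengths) or remaining < 0:
            return
        rec(idx + 1, remaining, chosen)                         # this diagonal stays all ones
        rec(idx + 1, remaining - lengths[idx], chosen + [idx])  # this diagonal is all zeros

    rec(0, num_zeros, [])
    return results

print(len(zero_diagonal_choices(4, 7)))  # 14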
The one thing which the first answer to this question got right is that permuting the rows and columns doesn't change the row sums or column sums.
Another easy observation is that in a Hankel matrix the difference in row sum between two consecutive rows is -1, 0, or 1, and each case gives us a constraint on the rows. If the difference is 0, then the entering element equals the exiting element; otherwise we know which is 0 and which is 1.
0 1 1 1
0 1 0 1
0 1 0 0
1 0 1 1
has row sums 3, 2, 1, 3. The orders which respect the difference requirement are 1 2 3 3 and 3 3 2 1, and wlog we can discard reversals because reversing the row and column permutations just rotates the matrix by 180 degrees. Therefore we reduce to considering four permuted matrices (two possible orderings of the 3s in the row sums, and two in the column sums):
0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1
0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
0 1 1 1 1 1 0 1 0 1 1 1 1 1 1 0
1 1 0 1 0 1 1 1 1 1 1 0 0 1 1 1
We could actually have taken the analysis further by observing that by forcing the initial rows to have sums 1 and 2 we constrain the order of the columns with sum 3, since
0 0 1 0
0 0 1 1
is not a valid pair of initial rows for a Hankel matrix. Whether or not this kind of reasoning is easy to implement depends on your programming paradigm.
Note that in the worst case this kind of reasoning still doesn't leave a polynomial number of cases to brute force through.
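As a small Python sketch of the row-sum argument above (my own illustration, not the answerer's code), the orderings that respect the ±1 difference constraint can be enumerated directly; for the row sums 3 2 1 3 it yields exactly 1 2 3 3 and 3 3 2 1 (reversals are not discarded here).
from itertools import permutations

def difference_respecting_orders(row_sums):
    # Distinct orderings of the sums whose consecutive differences are -1, 0, or 1.
    seen = set()
    for p in permutations(row_sums):
        if p not in seen and all(abs(p[i + 1] - p[i]) <= 1 for i in range(len(p) - 1)):
            seen.add(p)
            yield p

print(sorted(difference_respecting_orders([3, 2, 1, 3])))
# [(1, 2, 3, 3), (3, 3, 2, 1)]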
Here are some ideas.
1)
Row and column permutations preserve the row and column sums:
1 0 1 0 - 2
0 0 0 1 - 1 row sums
1 0 0 0 - 1
1 1 1 0 - 3
| | | |
3 1 2 1
column sums
Whichever way you permute the rows, the row sums will still be {2, 1, 1, 3} in some permutation; the column sums will be unchanged. And vice versa. Hankel matrices and their permutations will always have the same set of row sums as column sums. This gives you a quick test to rule out a set of non-viable matrices.
2)
I posit that Hankel matrices can always be permuted in such a way that their row and column sums are in ascending order, and the result is still a Hankel matrix:
0 1 1 0 - 2 0 0 0 1 - 1
1 1 0 0 - 2 0 0 1 1 - 2
1 0 1 1 - 3 --> 0 1 1 0 - 2
0 0 1 0 - 1 1 1 0 1 - 3
| | | | | | | |
2 2 3 1 1 2 2 3
Therefore if a matrix can be permuted into a Hankel matrix, then it can also be permuted into a Hankel matrix of ascending row and column sum. That is, we can reduce the number of permutations needed to test by only testing permutations where the row and column sums are in ascending order.
3)
I posit further that for any Hankel matrix where two or more rows have the same sum, every permutation of columns has a matching permutation of rows that also produces a Hankel matrix. That is, if a Hankel matrix exists for one permutation of columns, then it exists for every permutation of columns - since we can simply apply that same permutation to the corresponding rows and achieve a symmetrical result.
The upshot is that we only need to test permutations of rows or columns, not rows and columns.
Applied to the original example:
1 0 1 0 - 2 0 0 0 1 0 1 0 0 - 1 0 0 0 1
0 0 0 1 - 1 1 0 0 0 0 0 0 1 - 1 0 1 0 0
1 0 0 0 - 1 --> 1 0 1 0 --> 0 0 1 1 - 2 --> 0 0 1 1 = Hankel!
1 1 1 0 - 3 1 1 1 0 1 0 1 1 - 3 1 0 1 1
| | | |
3 1 2 1 permute rows into| ditto | try swapping
ascending order | for columns | top 2 rows
4)
I posit, finally, that every Hankel matrix where there are multiple rows and columns with the same sum can be permuted into another Hankel matrix with the property that those rows and columns are in increasing order when read as binary numbers - reading left-to-right for rows and top-to-bottom for columns. That is:
0 1 1 0 0 1 0 1 0 0 1 1
1 0 0 1 0 1 1 0 0 1 0 1 New
1 0 1 0 --> 1 0 0 1 --> 1 0 1 0 Hankel
0 1 0 1 1 0 1 0 1 1 0 0
Original rows columns
Hankel ascending ascending
If this is true (and I'm still undecided), then we only ever need to create and test one permutation of any given input matrix. That permutation puts both the rows and columns in order of ascending sum, and in the case of equal sums, orders them by their binary number interpretations. If this resultant matrix is not Hankel, then there is no permutation that will make it Hankel.
Hope that gets you on the way to an algorithm!
Addendum: Counterexamples?
Trying @orlp's example:
0 0 1 0 0 0 1 0 0 0 0 1
0 1 0 1 0 1 0 1 0 1 1 0
1 0 1 1 --> 0 1 1 1 --> 0 1 1 1
0 1 1 1 1 0 1 1 1 0 1 1
(A) (B) (C)
A: Original Hankel. Row sums are 1, 2, 3, 3; Rows 3 and 4 are not in binary order.
B: Swap rows 3 and 4. Columns 3 and 4 are not in binary order.
C: Swap columns 3 and 4. Result is Hankel and satisfies all the properties.
Trying @Degustaf's example:
1 1 0 1 0 1 0 0 0 0 1 0
1 0 1 0 1 0 0 1 0 1 0 1
0 1 0 0 --> 1 0 1 0 --> 1 0 0 1
1 0 0 1 1 1 0 1 0 1 1 1
(A) (B) (C)
A: Original Hankel matrix. Row sums are 3, 2, 1, 2.
B: Rearrange so that the row sums are 1, 2, 2, 3, and the rows of sum 2 are in ascending binary order (i.e. 1001, 1010)
C: Rearrange column sums to 1, 2, 2, 3, with the two columns of sum 2 in order (0101, 1001). Result is Hankel and satisfies all the properties. Note also that the permutation on the columns matches the permutation on the rows: the new column order from the old one is {3, 4, 2, 1}, the same operation to get from A to B.
Note: I suggest the binary order (#4) only for tiebreak situations on the row or column sum, not as a replacement for the sort in (#2).
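To make the proposal concrete, here is a minimal Python sketch (my own, following points 2-4 above and carrying the same caveat that the conjecture is unproven): sort the rows, then the columns, by (sum, binary value) and test whether the result has constant anti-diagonals.
def canonical_hankel_test(M):
    # Sort rows by (row sum, binary value read left to right), then columns by
    # (column sum, binary value read top to bottom), per points 2 and 4.
    rows = sorted([list(r) for r in M], key=lambda r: (sum(r), r))
    cols = sorted(zip(*rows), key=lambda c: (sum(c), c))
    P = [list(r) for r in zip(*cols)]  # the canonically permuted matrix
    n, m = len(P), len(P[0])
    # Hankel check: entries on the same anti-diagonal (equal i + j) are equal.
    is_hankel = all(P[i][j] == P[i + 1][j - 1]
                    for i in range(n - 1) for j in range(1, m))
    return is_hankel, P

ok, P = canonical_hankel_test([[0, 1, 1, 1],
                               [0, 1, 0, 1],
                               [0, 1, 0, 0],
                               [1, 0, 1, 1]])
print(ok)  # True: the question's matrix canonicalises to a Hankel matrix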

Finding all subsets of a multiset

Suppose I have a bag which contains 6 balls (3 white and 3 black). I want to find all possible subsets of a given length, disregarding the order. In the case above, there are only 4 combinations of 3 balls I can draw from the bag:
2 white and 1 black
2 black and 1 white
3 white
3 black
I already found a library in my language of choice that does exactly this, but I find it slow for larger numbers. For example, with a bag containing 15 white, 1 black, 1 blue, 1 red, 1 yellow and 1 green, there are only 32 combinations of 10 balls, but it takes 30 seconds to yield the result.
Is there an efficient algorithm which can find all those combinations that I could implement myself? Maybe this problem is not as trivial as I first thought...
Note: I'm not even sure of the right technical terms to express this, so feel free to correct the title of my post.
You can do significantly better than a general choose algorithm. The key insight is to treat all the balls of a given color at once, rather than one by one.
I created an un-optimized implementation of this algorithm in Python that correctly finds the 32 results of your test case in milliseconds:
def multiset_choose(items_multiset, choose_items):
    if choose_items == 0:
        return 1  # always one way to choose zero items
    elif choose_items < 0:
        return 0  # always no ways to choose less than zero items
    elif not items_multiset:
        return 0  # always no ways to choose some items from a set of no items
    elif choose_items > sum(item[1] for item in items_multiset):
        return 0  # always no ways to choose more items than are in the multiset
    current_item_name, current_item_number = items_multiset[0]
    max_current_items = min([choose_items, current_item_number])
    return sum(
        multiset_choose(items_multiset[1:], choose_items - c)
        for c in range(0, max_current_items + 1)
    )
And the tests:
print multiset_choose([("white", 3), ("black", 3)], 3)
# output: 4
print multiset_choose([("white", 15), ("black", 1), ("blue", 1), ("red", 1), ("yellow", 1), ("green", 1)], 10)
# output: 32
No, you don't need to search through all possible alternatives. A simple recursive algorithm (like the one given by @recursive) will give you the answer. If you are looking for a function that actually outputs all of the combinations, rather than just how many there are, here is a version written in R. I don't know what language you are working in, but it should be pretty straightforward to translate this into anything, although the code might be longer, since R is good at this kind of thing.
allCombos <- function(len,              ## number of items to sample
                      x,                ## array of quantities of balls, by color
                      names=1:length(x) ## names of the colors (defaults to "1","2",...)
                      ){
  if(length(x)==0)
    return(c())
  r <- c()
  for(i in max(0,len-sum(x[-1])):min(x[1],len))
    r <- rbind(r, cbind(i, allCombos(len-i, x[-1])))
  colnames(r) <- names
  r
}
Here's the output:
> allCombos(3,c(3,3),c("white","black"))
white black
[1,] 0 3
[2,] 1 2
[3,] 2 1
[4,] 3 0
> allCombos(10,c(15,1,1,1,1,1),c("white","black","blue","red","yellow","green"))
white black blue red yellow green
[1,] 5 1 1 1 1 1
[2,] 6 0 1 1 1 1
[3,] 6 1 0 1 1 1
[4,] 6 1 1 0 1 1
[5,] 6 1 1 1 0 1
[6,] 6 1 1 1 1 0
[7,] 7 0 0 1 1 1
[8,] 7 0 1 0 1 1
[9,] 7 0 1 1 0 1
[10,] 7 0 1 1 1 0
[11,] 7 1 0 0 1 1
[12,] 7 1 0 1 0 1
[13,] 7 1 0 1 1 0
[14,] 7 1 1 0 0 1
[15,] 7 1 1 0 1 0
[16,] 7 1 1 1 0 0
[17,] 8 0 0 0 1 1
[18,] 8 0 0 1 0 1
[19,] 8 0 0 1 1 0
[20,] 8 0 1 0 0 1
[21,] 8 0 1 0 1 0
[22,] 8 0 1 1 0 0
[23,] 8 1 0 0 0 1
[24,] 8 1 0 0 1 0
[25,] 8 1 0 1 0 0
[26,] 8 1 1 0 0 0
[27,] 9 0 0 0 0 1
[28,] 9 0 0 0 1 0
[29,] 9 0 0 1 0 0
[30,] 9 0 1 0 0 0
[31,] 9 1 0 0 0 0
[32,] 10 0 0 0 0 0
>
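For reference, here is a rough Python equivalent of the same recursion (my own translation, not from the answer); it yields one tuple of per-color counts for each combination:
def all_combos(length, counts):
    # Yield tuples of per-color counts summing to `length`, each bounded by `counts`.
    if not counts:
        if length == 0:
            yield ()
        return
    first, rest = counts[0], counts[1:]
    # The first count can't be so small that the remaining colors cannot make up
    # the rest, nor larger than the available quantity (or the requested length).
    for c in range(max(0, length - sum(rest)), min(first, length) + 1):
        for tail in all_combos(length - c, rest):
            yield (c,) + tail

print(len(list(all_combos(3, [3, 3]))))                # 4
print(len(list(all_combos(10, [15, 1, 1, 1, 1, 1]))))  # 32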

How can I find a solution of binary matrix equation AX = B?

Given an m*n binary matrix A and an m*p binary matrix B, where n > m, what is an efficient algorithm to compute X such that AX = B?
For example:
A =
1 1 0 0 1 1 0 1 0 0
1 1 0 0 1 0 1 0 0 1
0 1 1 0 1 0 1 0 1 0
1 1 1 1 1 0 0 1 1 0
0 1 1 0 1 0 1 1 1 0
B =
0 1 0 1 1 0 1 1 0 1 0 0 1 0
0 0 1 0 1 1 0 0 0 1 0 1 0 0
0 1 1 0 0 0 1 1 0 0 1 1 0 0
0 0 1 1 1 1 0 0 0 1 1 0 0 0
1 0 0 1 0 0 1 0 1 0 0 1 1 0
Note: when I say binary matrix, I mean a matrix defined over the field Z_2, that is, where all arithmetic is mod 2.
If it is of any interest, this is a problem I am facing in generating suitable matrices for a random error correction code.
You can do it with row reduction: place B to the right of A, then swap rows (in the whole augmented matrix) to get a 1 in row 0, column 0; then XOR that row into every other row that has a 1 in column 0, so that column 0 contains only a single 1. Then move to the next column: if the [1,1] entry is zero, swap row 1 with a later row that has a 1 there, then XOR rows to make it the only 1 in that column. Assuming A is a square matrix and a solution exists, you eventually convert A to the identity, and B is replaced by the solution to AX = B.
If n > m, you have a system with more unknowns than equations, so you can solve for some of the unknowns and set the others to zero. During the row reduction, if a column has no 1 to use as a pivot (below the rows already reduced), skip that column and set the corresponding unknowns to zero (this can happen at most n-m times).
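Here is a minimal Python/NumPy sketch of the row reduction described above (my own illustration, not from the answer), working over Z_2 with XOR; it augments A with B, eliminates, and reads off one particular solution X with the free unknowns set to zero, assuming the system is consistent:
import numpy as np

def solve_gf2(A, B):
    A = np.array(A, dtype=int) % 2
    B = np.array(B, dtype=int) % 2
    m, n = A.shape
    aug = np.hstack([A, B])      # augmented matrix [A | B]
    pivot_cols = []
    row = 0
    for col in range(n):
        if row >= m:
            break
        # Find a row at or below `row` with a 1 in this column to use as the pivot.
        pivot = next((r for r in range(row, m) if aug[r, col]), None)
        if pivot is None:
            continue             # no pivot: skip the column, its unknowns stay zero
        aug[[row, pivot]] = aug[[pivot, row]]  # swap the pivot row into place
        for r in range(m):
            if r != row and aug[r, col]:
                aug[r] ^= aug[row]             # XOR the pivot row into the others
        pivot_cols.append(col)
        row += 1
    X = np.zeros((n, B.shape[1]), dtype=int)
    for i, col in enumerate(pivot_cols):
        X[col] = aug[i, n:]      # pivot unknowns take the reduced right-hand side
    return X

# Usage: X = solve_gf2(A, B); then (A @ X) % 2 should equal B if a solution exists.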
