Difference between symmetric and asymmetric matrix - algorithm

I'm practising with graphs and adjacency matrices, but I couldn't find a good example that differentiates a symmetric from an asymmetric matrix. Can anyone explain how to tell the difference between a symmetric and an asymmetric matrix?

An adjacency matrix is symmetric if it is derived from an undirected graph.
That means the edge from node A -> B has the same cost/weight/length as the edge from node B -> A.
If you create the adjacency matrix M, it will be symmetric, meaning that for any i and j, M[i][j] == M[j][i]. More mathematically, the matrix is identical to its transpose: if you transpose it, it looks exactly the same. Graphically, such a matrix looks like this:
0 2 3 4
2 0 5 6
3 5 0 7
4 6 7 0
Due to the symmetry, you can often represent the matrix using less memory. For algorithms like the Floyd–Warshall algorithm on undirected graphs, you can reduce the amount of computation by 50%, since you only need to compute half of the matrix:
0 2 3 4
0 5 6
0 7
0
For comparison, an asymmetric matrix:
0 2 3 9 <--
2 0 5 6
3 5 0 7
4 6 7 0
Note that it is almost identical to the previous example, but in the upper-right corner there is a 9, so it is no longer possible to mirror the matrix along its diagonal axis.
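Both cases can be tested directly in code. A small sketch (`is_symmetric` is just an illustrative name, not from the answer):

```python
def is_symmetric(M):
    """True iff M[i][j] == M[j][i] for all i, j (M equals its transpose)."""
    n = len(M)
    return all(M[i][j] == M[j][i]
               for i in range(n) for j in range(i + 1, n))

sym = [[0, 2, 3, 4],
       [2, 0, 5, 6],
       [3, 5, 0, 7],
       [4, 6, 7, 0]]
asym = [[0, 2, 3, 9],   # the 9 in the upper-right corner breaks the symmetry
        [2, 0, 5, 6],
        [3, 5, 0, 7],
        [4, 6, 7, 0]]
print(is_symmetric(sym))   # True
print(is_symmetric(asym))  # False
```

Only the strict upper triangle needs checking, which mirrors the 50% saving mentioned above.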

You can check this example of a symmetric graph


Count pairs of nodes that can't reach each other, for each deleted edge

Question:
Given an undirected graph of N nodes and M edges, you need to solve 2 problems:
Problem 1: For each edge j (j : 1 -> M), if you delete that edge, count the number of pairs of nodes that can't reach each other (there is no path between the two nodes).
Problem 2: For each node i (i : 1 -> N), if you delete that node (which also deletes all of the edges connected to it), count the number of pairs of nodes that can't reach each other.
Example:
N = 6, M = 7
1 2
2 3
3 1
3 4
4 5
5 6
6 4
(Edges are described as u - v)
Result:
For each edge j (j : 1 -> M): 0 0 0 9 0 0 0
For each node i (i : 1 -> N): 0 0 6 6 0 0
P.S.: I have been thinking for many days but can't find a proper answer for this problem.
If the initial graph is connected, then the first problem is a search for bridges and the second one is a search for cut vertices / articulation points.
After removing a bridge, get the sizes of the two resulting connected components; the required result is the product of the sizes (for example, components of size 2 and size 3 give 6 pairs).
After removing a cut vertex, the number of components may be larger, and the result is the sum of pairwise products of the sizes (for components with sizes 1, 2, 3 the result is 1*2 + 1*3 + 2*3 = 11 pairs).
C++ code for solving both problems using DFS can be found here
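Since the linked code is C++, here is a minimal Python sketch of the bridge half (problem 1), assuming a connected graph with 1-based vertices; the function name and structure are my own:

```python
import sys
from collections import defaultdict

def count_unreachable_pairs_per_edge(n, edges):
    """For each edge, the pairs separated when it is deleted (0 unless a bridge).

    One DFS computes discovery times, low-links and subtree sizes; for a
    bridge that cuts off a subtree of size s, the answer is s * (n - s).
    Assumes the graph is connected and vertices are numbered 1..n.
    """
    adj = defaultdict(list)
    for idx, (u, v) in enumerate(edges):
        adj[u].append((v, idx))
        adj[v].append((u, idx))

    disc = [0] * (n + 1)   # discovery time, 0 = unvisited
    low = [0] * (n + 1)
    size = [1] * (n + 1)   # DFS subtree sizes
    ans = [0] * len(edges)
    timer = 1
    sys.setrecursionlimit(max(10000, 2 * n))

    def dfs(u, parent_edge):
        nonlocal timer
        disc[u] = low[u] = timer
        timer += 1
        for v, idx in adj[u]:
            if idx == parent_edge:
                continue
            if disc[v]:                        # back edge
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, idx)
                size[u] += size[v]
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:           # (u, v) is a bridge
                    ans[idx] = size[v] * (n - size[v])

    dfs(1, -1)
    return ans
```

On the example graph this yields `[0, 0, 0, 9, 0, 0, 0]`, matching the expected output. The cut-vertex version (problem 2) follows the same DFS, summing pairwise products of the sizes of the pieces a removed vertex leaves behind.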

Program to check if N^2 number of elements can be converted to a N*N symmetric matrix?

I was solving some problems on matrices the other day when this question hit me. Is there any way to check whether N^2 elements can be arranged so that they form a symmetric matrix?
For instance, if N=3, then N^2=9
Let the elements be : 1 2 3 1 2 3 1 2 3.
The above elements can be arranged to form a symmetric matrix like:
1 2 3
2 3 1
3 1 2
Similarly, nine 1s can be used to form a symmetric matrix as follows:
1 1 1
1 1 1
1 1 1
But the elements 1 2 3 4 5 6 7 8 9, can in no way be arranged to form a symmetric matrix.
I thought about this question a lot but could not come up with a solution. Could someone please help me?
In an N×N symmetric matrix, every entry above the main diagonal has an equal counterpart below the main diagonal. This means that, aside from the N elements on the main diagonal, all elements come in equal pairs. (Elements on the main diagonal can also come in equal pairs, but they're not required to; the matrix's symmetry isn't affected by, for example, whether a22 = a33 or not.)
So, you can simply count how often each distinct value occurs, and see how many of the values occur an odd number of times. If there are N or fewer distinct values that occur an odd number of times, then the main diagonal of an N×N matrix can accommodate the unpaired values, so a symmetric matrix is possible; otherwise, not.
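The counting argument above reduces to a few lines. A sketch (`can_form_symmetric` is my own name):

```python
from collections import Counter

def can_form_symmetric(elements, n):
    """True iff the n*n elements can be arranged into a symmetric matrix.

    Off-diagonal entries must pair up, so at most n values may occur an
    odd number of times: the unpaired ones go on the main diagonal.
    """
    assert len(elements) == n * n
    odd = sum(1 for count in Counter(elements).values() if count % 2)
    return odd <= n

print(can_form_symmetric([1, 2, 3, 1, 2, 3, 1, 2, 3], 3))  # True
print(can_form_symmetric([1] * 9, 3))                      # True
print(can_form_symmetric(list(range(1, 10)), 3))           # False
```

For 1 2 3 4 5 6 7 8 9 every value occurs exactly once, so nine values occur an odd number of times, far more than the three diagonal slots, hence no symmetric arrangement exists.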

Incidence matrices

Permutation of any two rows or columns in an incidence matrix simply corresponds to relabelling the vertices and edges of the same graph. Conversely, two graphs X and Y are isomorphic if and only if their incidence matrices A(X) and A(Y) differ only by permutations of rows and columns.
Can someone explain to me what this means, with an example? What exactly does "permutation of any two rows or columns" mean here?
"Permutation" here means "exchange". Consider the following node-node incidence matrix:
0 1 0
0 0 1
1 0 0
It defines a directed graph with vertices 0, 1, 2 whose edges form the cycle 0-1-2-0. Relabelling 1 to 2 and vice versa corresponds to exchanging the last two rows and, since this is a node-node matrix, also the last two columns. We obtain
0 0 1
1 0 0
0 1 0
where the cycle is 0-2-1-0. This means that both graphs are "identical up to renaming of vertices", i.e. they are isomorphic. (For a vertex-edge incidence matrix, as in the quoted statement, rows and columns can be permuted independently, because rows label vertices and columns label edges.)
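For a node-node matrix, a vertex relabelling permutes rows and columns together (P M Pᵀ). A small sketch (`relabel` is a hypothetical helper name):

```python
def relabel(M, perm):
    """Adjacency matrix after relabelling vertex i -> perm[i]:
    rows and columns are permuted simultaneously (P M P^T)."""
    n = len(M)
    R = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            R[perm[i]][perm[j]] = M[i][j]
    return R

M = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]                 # cycle 0 -> 1 -> 2 -> 0
print(relabel(M, [0, 2, 1]))    # swap labels 1 and 2
# [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
```

The result is the matrix of the cycle 0-2-1-0: the same graph up to renaming of vertices.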

Algorithm to maximize the smallest diagonal element of a matrix

Suppose we are given a square matrix A. Our goal is to maximize the smallest diagonal element by row permutations. In other words, the given n×n matrix A has n diagonal elements $d_i$, with minimum $\min_i d_i$. Our purpose is to reach, via row permutations, the matrix whose smallest diagonal element is as large as possible.
This is $\max \min_i d_i$ over all row permutations.
For example, suppose A = [4 3 2 1; 1 4 3 2; 2 1 4 3; 2.5 3.5 4.5 1.5]. The diagonal is [4, 4, 4, 1.5], so its minimum is 1.5. We can swap rows 3 and 4 to get the new matrix $\tilde{A}$ = [4 3 2 1; 1 4 3 2; 2.5 3.5 4.5 1.5; 2 1 4 3]. The new diagonal is [4, 4, 4.5, 3], with a new minimum of 3. In theory this is the best result I can obtain, since there seems to be no better option: 3 seems to be $\max \min_i d_i$.
In my problem, n is much larger, like 1000. There are n! row permutations, so I cannot go through each permutation. I know a greedy algorithm will help: start from the first row; if a_11 is not the largest element in the first column, swap the first row with the row holding that column's largest element. Then look at the second row, comparing a_22 with all remaining elements in the second column (excluding the row already fixed); swap rows if a_22 is not the largest among them, and so on until the last row.
Is there any better algorithm to do it?
This is similar to Minimum Euclidean Matching but they are not the same.
Suppose you wanted to know whether there was a better solution to your problem than 3.
Change your matrix to have a 1 for every element that is strictly greater than 3:
4   3   2   1         1 0 0 0
1   4   3   2         0 1 0 0
2.5 3.5 4.5 1.5  ->   0 1 1 0
2   1   4   3         0 0 1 0
Your problem can be interpreted as trying to find a perfect matching in the bipartite graph that has this binary matrix as its biadjacency matrix.
In this case, it is easy to see that there is no way of improving your result because there is no way of reordering rows to make the diagonal entry in the last column greater than 3.
For a larger matrix, there are efficient algorithms to determine maximal matchings in bipartite graphs.
This suggests an algorithm:
Use bisection over the candidate values to find the largest value for which the generated graph has a perfect matching
The row assignment given by the perfect matching at that largest value is the best permutation of rows
EDIT
This Python code illustrates how to use the networkx library to determine whether the graph has a perfect matching for a particular cutoff value.
import networkx as nx

A = [[4, 3, 2, 1],
     [1, 4, 3, 2],
     [2, 1, 4, 3],
     [2.5, 3.5, 4.5, 1.5]]
cutoff = 3

G = nx.DiGraph()
for i, row in enumerate(A):
    G.add_edge('start', 'row' + str(i), capacity=1.0)
    G.add_edge('col' + str(i), 'end', capacity=1.0)
    for j, e in enumerate(row):
        if e > cutoff:
            G.add_edge('row' + str(i), 'col' + str(j), capacity=1.0)

if nx.maximum_flow_value(G, 'start', 'end') < len(A):
    print('No perfect matching')
else:
    print('Has a perfect matching')
For a random matrix of size 1000*1000 it takes about 1 second on my computer.
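The full bisection can also be sketched without external dependencies, using Kuhn's augmenting-path matching for the feasibility test instead of max-flow (function names are my own):

```python
def best_min_diagonal(A):
    """Largest achievable minimum diagonal element over row permutations.

    Binary-searches the sorted distinct values of A; a cutoff v is feasible
    iff the bipartite graph with an edge (row i, column j) whenever
    A[i][j] >= v has a perfect matching (Kuhn's augmenting-path algorithm).
    """
    n = len(A)

    def feasible(v):
        # adjacency: row i -> columns j with A[i][j] >= v
        adj = [[j for j in range(n) if A[i][j] >= v] for i in range(n)]
        match = [-1] * n  # match[j] = row currently matched to column j

        def augment(i, seen):
            for j in adj[i]:
                if j not in seen:
                    seen.add(j)
                    if match[j] == -1 or augment(match[j], seen):
                        match[j] = i
                        return True
            return False

        return all(augment(i, set()) for i in range(n))

    values = sorted({x for row in A for x in row})
    lo, hi = 0, len(values) - 1
    while lo < hi:                    # invariant: values[lo] is feasible
        mid = (lo + hi + 1) // 2
        if feasible(values[mid]):
            lo = mid
        else:
            hi = mid - 1
    return values[lo]

A = [[4, 3, 2, 1],
     [1, 4, 3, 2],
     [2, 1, 4, 3],
     [2.5, 3.5, 4.5, 1.5]]
print(best_min_diagonal(A))  # 3
```

The smallest value is always feasible (every edge is present), so the invariant holds initially, and each bisection step costs one matching computation.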
Let $x_{ij}$ be 1 if row i is moved to row j and zero otherwise.
You're interested in the following integer program:
max z
s.t. $\sum_{i=1}^n x_{ij} = 1 \quad \forall j$
$\sum_{j=1}^n x_{ij} = 1 \quad \forall i$
$\sum_{i=1}^n A_{ij} x_{ij} \ge z \quad \forall j$
$x_{ij} \in \{0, 1\}$
Then plug this into GLPK, Gurobi, or CPLEX. Alternatively, solve the IP with your own branch-and-bound solver.
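For small n, either approach can be sanity-checked by brute force over all n! row orders (a toy check only, clearly not viable for n = 1000):

```python
from itertools import permutations

def brute_force_max_min_diag(A):
    """Exhaustively try all row orders; O(n! * n), small n only."""
    n = len(A)
    return max(min(A[p[j]][j] for j in range(n))
               for p in permutations(range(n)))

A = [[4, 3, 2, 1],
     [1, 4, 3, 2],
     [2, 1, 4, 3],
     [2.5, 3.5, 4.5, 1.5]]
print(brute_force_max_min_diag(A))  # 3
```

This confirms the example's claim that 3 is optimal: the last column contains no value greater than 3, so no permutation can push the minimum above it.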

Finding largest connected tree in a matrix

Assume I have an MxN matrix filled with values between 0 and 5. I want to determine the largest connected tree in that matrix, where the values of the matrix are considered to be the nodes. Two nodes are said to be connected if they are adjacent to each other either horizontally or vertically and have the same value. The size of a tree is the number of nodes in the tree.
An example:
1 0 3 0 0     2 2 0 0 0 0 0
1 1 2 2 2     0 2 0 0 0 0 0
0 1 0 3 0     0 2 0 0 0 0 2
3 1 0 3 0     0 2 0 2 2 2 2
              0 0 0 0 0 0 0
              3 0 0 3 3 0 0
              3 3 3 3 0 0 0
On the left side, the 1-nodes form the largest tree. On the right side, the 3-nodes form the largest tree, while there are two other trees consisting of 2-nodes.
I know I could probably do a simple depth-first search, but I'm wondering if there is something well-known that I'm missing, maybe in the realm of graph theory (like Kruskal's minimum spanning tree algorithm, but for this example).
You are looking for disjoint sets so I would suggest a disjoint-set data structure and a find/union algorithm:
see http://en.wikipedia.org/wiki/Disjoint-set_data_structure#Disjoint-set_forests
The union operation is symmetric so you really only need to compare each element of the matrix with its neighbor to the right and its neighbor below applying the union operation when the compared elements have the same value.
Sweep through each of the elements again using the find operation to count the size of each set keeping track of the largest. You will need storage for the counts.
The computational complexity will be O(MN·α(MN, MN)), where α is the inverse Ackermann function, which can be considered a small constant (< 5) for any practical value of MN. The extra storage complexity will be O(MN).
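A minimal sketch of that union/find approach (path halving; `largest_region` is my own name, and 0 cells are treated as empty background, as the example suggests):

```python
from collections import Counter

def largest_region(grid):
    """Largest 4-connected region of equal nonzero values, via union-find."""
    m, n = len(grid), len(grid[0])
    parent = list(range(m * n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # union each cell with its right and lower neighbours of equal value
    for i in range(m):
        for j in range(n):
            if grid[i][j] == 0:
                continue
            if j + 1 < n and grid[i][j] == grid[i][j + 1]:
                parent[find(i * n + j)] = find(i * n + j + 1)
            if i + 1 < m and grid[i][j] == grid[i + 1][j]:
                parent[find(i * n + j)] = find((i + 1) * n + j)

    # second sweep: count set sizes and keep the largest
    counts = Counter(find(i * n + j)
                     for i in range(m) for j in range(n) if grid[i][j])
    return max(counts.values()) if counts else 0
```

On the left example grid this returns 5 (the 1-nodes), and on the right grid 7 (the 3-nodes).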
Effectively, what you're looking for are connected components. A connected component is a set of nodes in which you can travel from any node to any other node within that component.
Connected components apply to graphs in general and can be found using BFS/DFS; from a complexity perspective, given an adjacency-matrix input, there is no better way to do it. The running time of the algorithm is O(N^2), where N is the number of nodes in the graph.
In your case the graph has more constraints: each node can be adjacent to at most 4 other nodes. With BFS/DFS, this gives a running time of O(4N) = O(N), where N is the number of nodes. There can't possibly be an algorithm with better complexity, as you need to consider each node at least once in the worst case.
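A BFS flood-fill sketch of the approach just described (again treating 0 as empty background, per the example; the function name is mine):

```python
from collections import deque

def largest_tree_bfs(grid):
    """Largest 4-connected region of equal nonzero values via BFS flood fill."""
    m, n = len(grid), len(grid[0])
    seen = [[False] * n for _ in range(m)]
    best = 0
    for si in range(m):
        for sj in range(n):
            if seen[si][sj] or grid[si][sj] == 0:
                continue
            val, size = grid[si][sj], 0
            queue = deque([(si, sj)])
            seen[si][sj] = True
            while queue:
                i, j = queue.popleft()
                size += 1
                # visit the at most 4 equal-valued neighbours
                for a, b in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if 0 <= a < m and 0 <= b < n \
                            and not seen[a][b] and grid[a][b] == val:
                        seen[a][b] = True
                        queue.append((a, b))
            best = max(best, size)
    return best
```

Each cell is enqueued at most once, which is the O(N) bound mentioned above.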
