Algorithm - Finding the number of pairs with diameter distance in a tree?

I have an unrooted, undirected, unweighted tree (not necessarily binary). I know how to find the diameter of the tree, i.e. the greatest distance between any pair of nodes, but I'm interested in the number of pairs that realise that maximum distance. Is there an algorithm to find the number of pairs at diameter distance in better than O(V^2) time, where V is the number of nodes?
Thank you!

Yes, there's a linear-time algorithm that operates bottom-up and resembles the algorithm for just finding the diameter. Here's the signature in Java-ish pseudocode; I'll leave the algorithm itself as an exercise.
class Node {
    Collection<Node> children;
}
class Result {
    int height;          // height of the tree
    int num_deep_nodes;  // number of nodes whose depth equals the height
    int diameter;        // length of the longest path inside the tree
    int num_long_paths;  // number of pairs of nodes at distance |diameter|
}
Result computeNumberOfLongPaths(Node root); // recursive
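Purely as an illustration (and not necessarily the answerer's intended solution), here is a Python sketch of one way such a recursion could be filled in, assuming the tree is given as an adjacency list adj:

import sys
from collections import defaultdict

def count_diameter_pairs(adj, root=0):
    # adj: adjacency list of the tree (adj[v] = iterable of neighbours of v).
    # Returns (height, num_deep_nodes, diameter, num_long_paths) for the whole tree.
    sys.setrecursionlimit(max(10000, 2 * len(adj)))

    def dfs(v, parent):
        diameter, num_paths = 0, 0          # best answer found inside this subtree
        child_depths = []                   # (1 + child height, #deepest nodes of that child)
        for w in adj[v]:
            if w == parent:
                continue
            h, nd, d, np_ = dfs(w, v)
            child_depths.append((h + 1, nd))
            if d > diameter:
                diameter, num_paths = d, np_
            elif d == diameter:
                num_paths += np_
        if not child_depths:                # v is a leaf of the rooted tree
            return 0, 1, diameter, num_paths
        height = max(d for d, _ in child_depths)
        num_deep = sum(nd for d, nd in child_depths if d == height)
        if len(child_depths) == 1:
            # paths whose top node is v go straight down to a deepest descendant
            through, through_pairs = child_depths[0]
        else:
            depths = sorted((d for d, _ in child_depths), reverse=True)
            through = depths[0] + depths[1]   # best length of a path whose top node is v
            cnt = defaultdict(int)            # depth -> deepest descendants at that depth
            sq = defaultdict(int)             # depth -> sum of squares per child
            for d, nd in child_depths:
                cnt[d] += nd
                sq[d] += nd * nd
            through_pairs = 0
            for d in cnt:                     # pair up depths d and through-d from different children
                e = through - d
                if e < d or e not in cnt:
                    continue
                if e == d:
                    through_pairs += (cnt[d] * cnt[d] - sq[d]) // 2
                else:
                    through_pairs += cnt[d] * cnt[e]
        if through > diameter:
            diameter, num_paths = through, through_pairs
        elif through == diameter:
            num_paths += through_pairs
        return height, num_deep, diameter, num_paths

    return dfs(root, -1)

For example, for the star 0-1, 1-2, 1-3, count_diameter_pairs([[1], [0, 2, 3], [1], [1]]) returns (2, 2, 2, 3): the diameter is 2 and the pairs (0,2), (0,3), (2,3) realise it.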

Yes, there is an O(V+E)-time algorithm. It is simply a modified version of finding the diameter.
As we know, we can find the diameter using two calls of BFS: make the first call from any node and remember the last node discovered, u; then run a second call BFS(u) and remember the last node discovered, say v. The distance between u and v gives us the diameter.
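For reference, a minimal Python sketch of this standard double-BFS computation (just an illustration of the idea described above, assuming the tree is given as an adjacency list):

from collections import deque

def bfs(adj, s):
    # Returns (distance array, parent array, last vertex dequeued).
    dist = [-1] * len(adj)
    parent = [-1] * len(adj)
    dist[s] = 0
    q = deque([s])
    last = s
    while q:
        e = q.popleft()
        last = e                      # vertices leave the queue in order of distance
        for w in adj[e]:
            if dist[w] == -1:
                dist[w] = dist[e] + 1
                parent[w] = e
                q.append(w)
    return dist, parent, last

def diameter(adj):
    _, _, u = bfs(adj, 0)             # farthest vertex from an arbitrary start
    dist2, _, v = bfs(adj, u)         # farthest vertex from u
    return dist2[v], u, v             # diameter length and its two endpoints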
Coming to the number of pairs at that maximum distance:
1. Before invoking the first BFS, initialize an array distance of length |V| and set distance[s] = 0, where s is the starting vertex of the first BFS call.
2. In the BFS, modify the while loop as:
while (Q is not empty)
{
    e = dequeue(Q)
    for all vertices w adjacent to e
    {
        if (w is not visited)
        {
            enqueue(w)
            mark w as visited
            distance[w] = distance[e] + 1
            parent[w] = e
        }
    }
}
3. As said above, remember the last node visited; call it u. Now count the number of vertices that are at the same level as vertex u. mark is an array of length n with all values initialized to 0; 0 means the vertex has not been counted yet.
n1 = 0
for i = 1 to number of vertices
{
    if (distance[i] == distance[u] && mark[i] == 0)
    {
        n1++
        mark[i] = 1   /* vertex counted */
    }
}
n1 gives the number of vertices at the same level as vertex u. All vertices with mark[i] = 1 are now marked and will not be counted again.
4. Similarly, before performing the second BFS on u, initialize another array distance2 of length |V| and set distance2[u] = 0.
5. Run BFS(u) and again take the last node discovered, say v.
6. Repeat step 3, this time on the distance2 array, with a different counter, say n2 = 0, and with the condition being:
if (distance2[i] == distance2[v] && mark[i] == 0)
    n2++
else if (distance2[i] == distance2[v] && mark[i] == 1)
    set_common = 1
7. set_common is a global flag that is set when there is a set of vertices such that the path between any two of them has diameter length, and the first BFS marked at least one of those vertices but not all of them (which is why mark[i] == 1 can occur here).
Suppose the first BFS had marked all such vertices in its call; then n2 would be 0 and set_common would not be set (nor would it be needed). But that situation is handled the same way as the one above.
In either of these cases the number of pairs giving the diameter is
C(n1 + n2, 2) - X = (n1 + n2)! / (2! * (n1 + n2 - 2)!) - X = (n1 + n2)(n1 + n2 - 1)/2 - X
I will elaborate on what X is below. Otherwise the number of pairs is n1 * n2, which is the case when two disjoint sets of vertices give the diameter.
So the condition used is
if (n2 == 0 || set_common == 1)
    number_of_pairs = C(n1 + n2, 2) - X
else
    number_of_pairs = n1 * n2
Now about X. It can happen that some of the marked vertices share a common parent; in that case their combinations must not be counted. So before using the condition above it is advised to run the following:
X = 0   /* initialize */
for each vertex p
{
    s = number of vertices i with mark[i] == 1 and parent[i] == p
    if (s > 1)
        X = X + C(s, 2)
}
Proof of correctness
It is straightforward: since BFS traverses a tree level by level, n1 gives the number of vertices on the level of u and n2 gives the number of vertices on the level of v, and the distance between u and v is the diameter. Therefore the distance between any vertex on the level of u and any vertex on the level of v is equal to the diameter.
The time taken is 2|V| + 2 * (time of one BFS) = O(V + E).

Related

Count number of node-disjoint paths between any two nodes in a directed graph such that their distance is <= K

How can we count the number of node-disjoint paths between two nodes such that the distance between them is at most K?
Details about node-disjoint paths can be found here.
We are given a directed graph in which we have to count the number of node-disjoint paths from vertex u to v such that the maximum number of nodes between them is K - 2 (u and v are excluded from the count of K, hence K - 2). The number of vertices in the graph can be up to 10^5 and the number of edges up to 6 * 10^5. I thought of running BFS from every node while the maximum distance from the source node is less than K, but I am not getting an idea for the implementation. Can anybody please help me?
If anybody has an idea for solving it using DFS, please share it.
DFS is the key to solving such problems. We can easily enumerate all possible paths between 2 vertices using DFS, but we have to take care of the distance constraint.
My algorithm treats the number of edges traversed as the constraint. You can easily convert it to the number of nodes traversed; take that as an exercise.
We keep track of the number of edges traversed in a variable e. If e becomes greater than K - 2, we terminate that recursive DFS call.
To record that a vertex has been visited we keep a boolean array visited. If a recursive call terminates without finding a successful path, we discard any changes made to the array visited.
Only if a recursive DFS call succeeds in finding a path do we retain the visited array for the rest of the program.
So the pseudocode for the algorithm would be:
main function()
{
    visited[source] = 1
    e = 0           // edges traversed so far
    sum = 0         // the answer
    found = false   // whether a path has just been found
    dfs(source, e)
    print sum
    .
    .
    .
}
dfs(v, e)
{
    if (e > max_distance)
    {
        return
    }
    if (e <= max_distance and v == destination)
    {
        found = true
        sum++
        return
    }
    for all unvisited neighbouring vertices X of v
    {
        if (found and v != source)
            return
        if (found and v == source)
        {
            found = false
            visited[destination] = 0
        }
        visited[X] = 1
        dfs(X, e + 1)
        if (!found)
            visited[X] = 0   // discard the mark: no path was found through X
    }
}
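Below is a rough Python sketch of this greedy strategy (my own transcription, assuming an adjacency-list input and a bound given directly as a number of edges max_distance; the count it produces depends on the greedy visiting order). It searches for one short path at a time with DFS and keeps the vertices of every successful path marked so that later paths cannot reuse them:

def count_node_disjoint_short_paths(adj, source, destination, max_distance):
    # adj: adjacency list of a directed graph (adj[v] = list of successors of v).
    if source == destination:
        return 0
    n = len(adj)
    visited = [False] * n
    visited[source] = True

    def dfs(v, e):
        # Try to extend the current path from v, having used e edges so far.
        if e > max_distance:
            return False
        if v == destination:
            return True
        for w in adj[v]:
            if not visited[w]:
                visited[w] = True
                if dfs(w, e + 1):
                    return True          # keep w marked: it lies on a used path
                visited[w] = False       # dead end: release w for later paths
        return False

    count = 0
    while dfs(source, 0):
        count += 1
        visited[destination] = False     # the destination may terminate further paths
    return count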

Minimum Spanning tree different from another

Assume we are given
an undirected graph g where every node i, 1 <= i < n, is connected to every j, i < j <= n,
and a source s.
We want to find the total cost (defined as the sum of all edge weights) of the cheapest minimum spanning tree that differs from the minimum distance tree of s (i.e. from the MST obtained by running Prim/Dijkstra on s) by at least one edge.
What would be the best way to tackle this? Currently I can only think of some kind of fixed-point iteration:
1. run dijkstra on (g,s) to obtain the reference tree r that we need to differ from
2. costs := sum(edge_weights_of(r))
3. change := 0
4. for each vertex u in r, run a BFS and note, for each reached vertex v, the longest edge on the path from u to v
5. iterate through all edges e = (a,b) in g and find the edge e' = (a',b') that is NOT in r and minimizes newchange := weight(e') - weight(longest_edge(a',b'))
6. if (first_time_here OR newchange < 0) then change += newchange
7. if (newchange < 0) goto 4
8. result := costs + change
That seems to waste a lot of time... It relies on the fact that adding an edge to a spanning tree creates a cycle from which we can remove the longest edge.
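For illustration, here is a rough Python sketch of that cycle/swap idea (my own, with an assumed edge-list input): for every edge not in the reference tree r, swap it against the heaviest tree edge on the cycle it closes and keep the cheapest result.

from collections import defaultdict, deque

def cheapest_differing_tree_cost(edges, tree_edges):
    # edges: list of (w, a, b) for the whole graph;
    # tree_edges: list of (w, a, b) forming the reference spanning tree r.
    # Returns the cheapest cost of a spanning tree differing from r by one edge
    # (or None if every graph edge is already in r).
    adj = defaultdict(list)
    for w, a, b in tree_edges:
        adj[a].append((b, w))
        adj[b].append((a, w))
    base_cost = sum(w for w, _, _ in tree_edges)
    in_tree = {frozenset((a, b)) for _, a, b in tree_edges}

    def heaviest_on_path(a, b):
        # BFS inside the tree from a, tracking the heaviest edge on the unique path.
        best = {a: float('-inf')}
        q = deque([a])
        while q:
            v = q.popleft()
            if v == b:
                return best[b]
            for u, w in adj[v]:
                if u not in best:
                    best[u] = max(best[v], w)
                    q.append(u)
        return None   # should not happen for a spanning tree

    best_cost = None
    for w, a, b in edges:
        if frozenset((a, b)) in in_tree:
            continue
        candidate = base_cost + w - heaviest_on_path(a, b)
        if best_cost is None or candidate < best_cost:
            best_cost = candidate
    return best_cost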
I also thought about using Kruskal to get an overall minimum spanning tree, and only using the above algorithm to replace a single edge when the trees from Prim and Kruskal happen to be the same, but that doesn't seem to work, as the result would depend heavily on the edges selected during a run of Kruskal.
Any suggestions/hints?
You can do it using Prim's algorithm.
Prim's algorithm:
let T be a single vertex x
while (T has fewer than n vertices)
{
    1. find the smallest edge connecting T to G-T
    2. add it to T
}
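For concreteness, a minimal Python sketch of this standard Prim loop (purely illustrative, assuming the graph is given as an adjacency list adj where adj[v] is a list of (weight, neighbour) pairs):

import heapq

def prim_mst(adj, start=0):
    # Returns (total weight, list of tree edges as (weight, a, b)).
    n = len(adj)
    in_tree = [False] * n
    in_tree[start] = True
    heap = [(w, start, u) for w, u in adj[start]]
    heapq.heapify(heap)
    total, tree_edges = 0, []
    while heap and len(tree_edges) < n - 1:
        w, a, b = heapq.heappop(heap)     # smallest edge leaving the current tree
        if in_tree[b]:
            continue
        in_tree[b] = True
        total += w
        tree_edges.append((w, a, b))
        for w2, u in adj[b]:
            if not in_tree[u]:
                heapq.heappush(heap, (w2, b, u))
    return total, tree_edges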
Now let's modify it.
Suppose you already have one minimum spanning tree, say Tree(E,V).
Use this algorithm:
Prim's algorithm (modified):
let T be a single vertex
let isOther = false
while (T has fewer than n vertices)
{
    1. find the smallest edge (say e) connecting T to G-T
    2. if more than one such smallest edge is found {
           check which of them is in E(Tree)
           choose one different from that one
           add it to T
           set isOther = true
       }
       else if only one such edge is found {
           add it to T
           if E(Tree) doesn't contain this edge, set isOther = true
           else don't touch isOther (keep its value)
       }
}
If isOther = true, you have found another tree different from Tree(E,V), and it is T.
Otherwise the graph has a single minimum spanning tree.

how to find root of a directed acyclic graph

I need a method to find the root of a directed acyclic graph. I am using a boolean adjacency matrix to represent the graph in Java, so please suggest an approach. Also, the graph is unweighted.
Just find the nodes whose in-degree is 0. For the algorithm below to work we assume that no node in the graph is isolated.
int indegree[n] = {0};
for (i = 0; i < n; ++i) {
    for (j = 0; j < n; ++j) {
        if (graph[i][j] == 1) {   // assuming an edge from i to j
            indegree[j]++;
        }
    }
}
for (int i = 0; i < n; ++i) {
    if (indegree[i] == 0) add i to roots;
}
You are looking for nodes with no in-edges. If the adjacency matrix is encoded so that entry (i,j) contains a 1 if and only if there is an edge from i to j, then for node K to be a root, there must be no edges of the form i->K, therefore no 1's in entries of the form (i, K). So you are looking for columns K with all zeros. Each such column is a root.
In pseudocode,
roots = {}
for k in 1 to N
    for i in 1 to N
        if adjacencies[i, k] > 0
            continue with next k value
    add k to roots
It can be done in linear time; it is basically a DFS over the graph with all the edges reversed.
Pick any vertex in the given graph G.
Check whether the vertex has in-degree 0. If it does, we have found a vertex that is a root of the graph.
If not, mark the current vertex v as visited and repeat the same process on all the unvisited parents of v.
This fetches the required vertices with in-degree zero, i.e. the roots of the DAG.
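A small Python sketch of that backwards walk over a boolean adjacency matrix (illustrative only; graph[i][j] == True is taken to mean an edge from i to j, and starting from every still-unvisited vertex makes sure all roots are reported):

def find_roots_backwards(graph):
    # graph: n x n boolean adjacency matrix.
    n = len(graph)
    visited = [False] * n
    roots = []

    def walk_up(v):
        # Move against edge direction until a vertex with no incoming edge is found.
        visited[v] = True
        parents = [i for i in range(n) if graph[i][v]]
        if not parents:
            roots.append(v)        # in-degree 0: v is a root
            return
        for p in parents:
            if not visited[p]:
                walk_up(p)

    for v in range(n):
        if not visited[v]:
            walk_up(v)
    return roots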

determine whether an undirected graph is a tree

I have written an algorithm to determine whether an undirected graph is a tree.
Assumptions: the graph G is represented as an adjacency list, and we already know the number of vertices, which is n.
Is_graph_a_tree(G, 1, n)   /* using BFS */
{
    Q = {1}                      // a queue
    an array M[1:n] with M[i] = 0 for all i    /* to mark visited vertices */
    M[1] = 1
    edgecount = 0                // number of edges visited
    while ((Q is not empty) and (edgecount <= n-1))
    {
        i = dequeue(Q)
        for each edge (i,j) with M[j] == 0 and edgecount <= n-1
        {
            M[j] = 1
            Q = Q U {j}
            edgecount++
        }
    }
    if (edgecount != n-1)
        print "G is not a tree"
    else
    {
        if there exists i such that M[i] == 0
            print "G is not a tree"
        else
            print "G is a tree"
    }
}
Is it right?
Is the time complexity of this algorithm O(n)?
I think the counting of edges is not correct. You should also count edges (i,j) for which M[j] = 1 but j is not the parent of i (so you would also need to keep the parent of each node).
Maybe it is better to count the edges at the end, by summing the sizes of the adjacency lists and dividing by 2.
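A short Python sketch of that suggestion (my illustration, assuming an adjacency-list input): count the edges from the adjacency-list sizes and check connectivity with one BFS; a connected graph on n vertices is a tree exactly when it has n - 1 edges.

from collections import deque

def is_tree(adj):
    # adj: adjacency list of an undirected graph on vertices 0 .. n-1.
    n = len(adj)
    edge_count = sum(len(neighbours) for neighbours in adj) // 2
    if edge_count != n - 1:
        return False
    # check that all n vertices are reachable from vertex 0
    seen = [False] * n
    seen[0] = True
    q = deque([0])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if not seen[w]:
                seen[w] = True
                q.append(w)
    return all(seen)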
You want to do a depth-first search. In a DFS of an undirected graph there are only tree edges and back edges, so you can just run a standard DFS and if you find a back edge then it's not a tree.

Finding the heaviest length-constrained path in a weighted Binary Tree

UPDATE
I worked out an algorithm that I think runs in O(n*k) time. Below is the pseudo-code:
routine heaviestKPath( T, k )
// create a 2D matrix with n rows and k+1 columns, with each element = -∞
// (we use k+1 columns because the 0th column must be all 0s for a later
// function to work properly, and for simplicity in our algorithm)
matrix = new array[ T.getVertexCount() ][ k + 1 ] (-∞);
// set all elements in the first column of this matrix = 0
for i = 1 to T.getVertexCount() { matrix[ i ][ 0 ] = 0; }
// fill our matrix by traversing the tree
traverseToFillMatrix( T.root, k );
// consider a path that would arc over a node
globalMaxWeight = -∞;
findArcs( T.root, k );
return globalMaxWeight
end routine
// node = the current node; k = the path length; node.lc = node’s left child;
// node.rc = node’s right child; node.idx = node’s index (row) in the matrix;
// node.lc.wt/node.rc.wt = weight of the edge to left/right child;
routine traverseToFillMatrix( node, k )
if (node == null) return;
traverseToFillMatrix(node.lc, k ); // recurse left
traverseToFillMatrix(node.rc, k ); // recurse right
// in the case that a left/right child doesn’t exist, or both,
// let’s assume the code is smart enough to handle these cases
matrix[ node.idx ][ 1 ] = max( node.lc.wt, node.rc.wt );
for i = 2 to k {
// max returns the heavier of the 2 paths
matrix[node.idx][i] = max( matrix[node.lc.idx][i-1] + node.lc.wt,
matrix[node.rc.idx][i-1] + node.rc.wt);
}
end routine
// node = the current node, k = the path length
routine findArcs( node, k )
if (node == null) return;
nodeMax = matrix[node.idx][k];
i = 1;
j = k-1;
while ( i+j == k AND i < k ) {
left = node.lc.wt + matrix[node.lc.idx][i-1];
right = node.rc.wt + matrix[node.rc.idx][j-1];
if ( left + right > nodeMax ) {
nodeMax = left + right;
}
i++; j--;
}
// if this node’s max weight is larger than the global max weight, update
if ( globalMaxWeight < nodeMax ) {
globalMaxWeight = nodeMax;
}
findArcs( node.lc, k ); // recurse left
findArcs( node.rc, k ); // recurse right
end routine
Let me know what you think. Feedback is welcome.
I think I have come up with two naive algorithms that find the heaviest length-constrained path in a weighted binary tree. First, the problem description: given an n-vertex binary tree with weighted edges and some value k, find the heaviest path of length k.
For both algorithms, I'll need a reference to all vertices, so I'll just do a simple traversal of the tree to collect them, with each vertex keeping a reference to its left, right, and parent nodes.
Algorithm 1
For this algorithm, I'm basically planning on running DFS from each node in the Tree, with consideration to the fixed path length. In addition, since the path I'm looking for has the potential of going from left subtree to root to right subtree, I will have to consider 3 choices at each node. But this will result in a O(n*3^k) algorithm and I don't like that.
Algorithm 2
I'm essentially thinking about using a modified version of Dijkstra's Algorithm in order to consider a fixed path length. Since I'm looking for the heaviest path and Dijkstra's Algorithm finds the lightest, I'm planning on negating all edge weights before starting the traversal. Actually... this doesn't make sense, since I'd have to run Dijkstra's from each node and that doesn't seem much more efficient than the above algorithm.
So I guess I have several questions. Firstly, do the algorithms I've described above solve the problem at hand? I'm not totally certain the Dijkstra's version will work, as Dijkstra's is meant for positive edge values.
Now, I am sure there exist more clever/efficient algorithms for this... what is a better algorithm? I've read about "Using spine decompositions to efficiently solve the length-constrained heaviest path problem for trees" but that is really complicated and I don't understand it at all. Are there other algorithms that tackle this problem, maybe not as efficiently as spine decomposition but easier to understand?
You could use a DFS downwards from each node that stops after k edges to search for paths, but notice that this will do 2^k work at each node for a total of O(n*2^k) work, since the number of paths doubles at each level you go down from the starting node.
As DasBoot says in a comment, there is no advantage to using Dijkstra's algorithm here, since its cleverness amounts to choosing the shortest (or longest) way to get between 2 points when multiple routes are possible. With a tree there is always exactly 1 way.
I have a dynamic programming algorithm in mind that will require O(nk) time. Here are some hints:
If you choose some leaf vertex to be the root r and direct all other vertices downwards, away from the root, notice that every path in this directed tree has a highest node -- that is, a unique node that is nearest to r.
You can calculate the heaviest length-k path overall by going through each node v and calculating the heaviest length-k path whose highest node is v, finally taking the maximum over all nodes.
A length-k path whose highest node is v must have a length-i path descending towards one child and a length-(k-i) path descending towards the other.
That should be enough to get you thinking in the right direction; let me know if you need further help.
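A compact Python sketch along the lines of these hints (my own illustration, not the answerer's code; the Node fields lw/rw and left/right are assumptions about the representation, and NEG stands for -∞ meaning "no such path"):

NEG = float('-inf')

class Node:
    def __init__(self, lw=0, rw=0, left=None, right=None):
        self.left, self.right = left, right   # child nodes (or None)
        self.lw, self.rw = lw, rw             # weights of the edges to those children

def heaviest_k_path(root, k):
    best = NEG   # heaviest weight of any path with exactly k edges

    def solve(node):
        # Returns down[i] = heaviest weight of a path of exactly i edges
        # going strictly downwards from node (NEG if no such path exists).
        nonlocal best
        if node is None:
            return None
        dl = [NEG] * (k + 1)      # i edges descending through the left child
        dr = [NEG] * (k + 1)      # i edges descending through the right child
        dl[0] = dr[0] = 0
        left_down = solve(node.left)
        right_down = solve(node.right)
        for i in range(1, k + 1):
            if left_down is not None and left_down[i - 1] > NEG:
                dl[i] = node.lw + left_down[i - 1]
            if right_down is not None and right_down[i - 1] > NEG:
                dr[i] = node.rw + right_down[i - 1]
        # combine: a k-edge path whose highest node is this node uses
        # i edges on one side and k - i on the other
        for i in range(k + 1):
            if dl[i] > NEG and dr[k - i] > NEG:
                best = max(best, dl[i] + dr[k - i])
        return [max(dl[i], dr[i]) for i in range(k + 1)]

    solve(root)
    return best

The combining loop is exactly hint 3: i edges descending towards one child and k - i towards the other, maximised over all nodes.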
Here's my solution. Feedback is welcome.
Let's treat the binary tree as a directed graph, with edges going from parent to children. Let's define two concepts for each vertex v:
a) an arc: a directed path that starts at vertex v, with every vertex on the path being a descendant of the starting vertex v.
b) a child-path: a directed or non-directed path containing v; it could start anywhere, end anywhere, and go from a child of v up to v and then, say, down to its other child. The set of arcs is a subset of the set of child-paths.
We also define a function HeaviestArc(v,j), which gives, for a vertex v, the heaviest arc of length j starting at v, on either the left or the right side. We also define LeftHeaviest(v,j) and RightHeaviest(v,j) as the heaviest left and right arcs of length j, respectively.
Given this, we can define the following recurrences for each vertex v, based on its children:
LeftHeaviest(v,j) = weight(LeftEdge(v)) + HeaviestArc(LeftChild(v), j-1);
RightHeaviest(v,j) = weight(RightEdge(v)) + HeaviestArc(RightChild(v), j-1);
HeaviestArc(v,j) = max(LeftHeaviest(v,j), RightHeaviest(v,j));
Here j goes from 1 to k, and HeaviestArc(v,0) = LeftHeaviest(v,0) = RightHeaviest(v,0) = 0 for all v. For leaf nodes, HeaviestArc(v,0) = 0 and HeaviestArc(v,j) = -inf for all other j (I need to think about corner cases more thoroughly).
And then HeaviestChildPath(v), the heaviest child-path containing v, can be calculated as:
HeaviestChildPath(v) = max over j = 0..k of ( LeftHeaviest(v,j) + RightHeaviest(v,k-j) )
The heaviest path should be the heaviest of all child paths.
The estimated runtime of the algorithm should be order O(kn).
def traverse(node, running_weight, level):
    global max_weight
    if node is None:
        return
    if level == 0:
        if max_weight < running_weight:
            max_weight = running_weight
        return
    traverse(node.left, running_weight + node.weight, level - 1)
    traverse(node.right, running_weight + node.weight, level - 1)
    traverse(node.parent, running_weight + node.weight, level - 1)

max_weight = 0
for node in tree:        # tree = iterable of all nodes; N = desired path length
    traverse(node, 0, N)
