A graph centroid is a vertex such that every connected component hanging off it has size less than or equal to N/2, where N is the number of vertices reachable through it?! [Does this definition need correction?!]
Here's a problem on CodeForces that asks whether each vertex can be made a centroid by removing and then re-adding exactly one edge.
Problem Statement
I need help refining this pseudocode/algorithm.
loop over all vertices V:
    loop over all edges E:
        place E in every empty edge position between two unconnected nodes:
            count the size of each connected component (*1)
            if all sizes are less than or equal to N/2:
                return true
The problem is that this algorithm runs in at least O(N*M^2) time, which is not acceptable.
I looked up the accepted answers, but I couldn't work out the high-level idea behind them. Could you please help me understand how these solutions work?
Solutions' Link
(*1) DFS Loop
I will try to describe a not-so-complex algorithm for solving this problem in linear time; for future reference, see my code (it has some comments).
The main idea is that you can root the tree T at an arbitrary vertex and traverse it; for each vertex V you do the following:
Cut the subtree rooted at V from T.
Find the heaviest subtree H with size <= N/2 (H can be in T or in the subtree of V).
Move subtree H to become a child of V.
Re-root T at V and check whether the heaviest remaining subtree has size <= N/2.
The previous algorithm can be implemented carefully to get linear time complexity; the issue is that it has a lot of cases to handle.
A better idea is to find the centroid C of T and root T at vertex C.
Having vertex C as the root of T is useful because it guarantees that every subtree hanging off C has size <= N/2.
When traversing the tree we can avoid checking for the heaviest subtree down the tree, only up: every time we visit a child W, we can pass down the heaviest size (still <= N/2) that would be available if we re-rooted T at W.
Try to understand what I explained, and let me know if something is not clear.
Well, the centroid of a tree can be determined in O(N) time and space.
Construct an adjacency list representing the tree, with the i-th row holding the nodes to which the i-th node is connected. You can use any other representation as well.
Maintain 2 linear arrays of size N, with index i holding the depth of the i-th node (depth) and the parent of the i-th node (parent), respectively.
Also maintain 2 more linear arrays: one containing the BFS traversal sequence of the tree (queue), and the other (leftOver) containing the value [N - number of nodes in the subtree rooted at that node]. In other words, the i-th index holds the number of nodes that remain in the whole tree when the i-th node is removed along with all its children.
Now, perform a BFS traversal from an arbitrary root and fill the arrays parent and depth, recording the traversal sequence in the array queue. This takes O(N) time.
Starting from the leaf nodes, add the number of nodes in the subtree rooted at each node to the value at its parent's index in leftOver. This also takes O(N) time, since you can walk the already-prepared queue array backwards.
Finally, traverse the array leftOver and replace each value with [N - 1 - initial value]. The leftOver array is now prepared. Cost: another O(N).
Your work is almost done. Now, iterate over leftOver and find the index whose value is closest to floor(N/2) without ever exceeding it.
This index is the index of the Centroid of the Tree. Overall time complexity: O(N).
Java Code:
import java.util.ArrayList;
import java.util.Iterator;
import java.util.Scanner;

class Find_Centroid
{
    static final int MAXN = 100_005;
    static ArrayList<Integer>[] graph;
    static int[] depth, parent;   // Step 2
    static int N;
    static Scanner io = new Scanner(System.in);

    public static void main(String[] args)
    {
        int i;
        N = io.nextInt();                // number of nodes in the tree
        graph = new ArrayList[N];
        for (i = 0; i < graph.length; ++i)
            graph[i] = new ArrayList<>();
        // Initialisation
        for (i = 1; i < N; ++i)
        {
            int a = io.nextInt() - 1, b = io.nextInt() - 1;   // assuming 1-based input
            graph[a].add(b);
            graph[b].add(a);             // Step 1
        }
        int centroid = findCentroid(new java.util.Random().nextInt(N));
        // Arbitrary indeed... ;)
        System.out.println("Centroid: " + (centroid + 1));    // '+1' for 1-based output
    }

    static int[] queue = new int[MAXN], leftOver;   // Step 3

    static int findCentroid(int r)
    {
        leftOver = new int[N];
        int i, target = N / 2, ach = -1;
        bfs(r);                          // Step 4
        // Accumulate subtree sizes bottom-up by walking the BFS order backwards
        for (i = N - 1; i >= 0; --i)
            if (queue[i] != r)
                leftOver[parent[queue[i]]] += leftOver[queue[i]] + 1;   // Step 5
        for (i = 0; i < N; ++i)
            leftOver[i] = N - 1 - leftOver[i];                          // Step 6
        for (i = 0; i < N; ++i)
            if (leftOver[i] <= target && leftOver[i] > ach)
            {
                // closest to target (= N/2) but does not exceed it
                r = i;
                ach = leftOver[i];
            }
        // Step 7
        return r;
    }

    static void bfs(int root)   // iterative
    {
        parent = new int[N];
        depth = new int[N];
        int st = 0, end = 0;
        parent[root] = -1;      // parent of root is obviously undefined, hence -1
        depth[root] = 1;        // assuming depth of root = 1
        queue[end++] = root;
        while (st < end)
        {
            int node = queue[st++], h = depth[node] + 1;
            Iterator<Integer> itr = graph[node].iterator();
            while (itr.hasNext())
            {
                int ch = itr.next();
                if (depth[ch] > 0)  // already visited: 'ch' is the parent of 'node'
                    continue;
                depth[ch] = h;
                parent[ch] = node;
                queue[end++] = ch;  // recording the traversal sequence
            }
        }
    }
}
Now, for the problem http://codeforces.com/contest/709/problem/E: iterate over each node i, consider it as the root, keep descending into the child that has more than N/2 nodes, and try to arrive at a node that has just under N/2 nodes (as close to N/2 as possible) beneath it. If removing this node along with all its children makes i the centroid, print 1; otherwise print 0. This process can be carried out efficiently, as the leftOver array is already there for you.
Actually, you are detaching the disturbing node (the node that is preventing i from being the centroid) along with its children, and attaching it to the i-th node itself. The detached subtree is guaranteed to have at most N/2 nodes (as checked earlier), so it won't cause a problem now.
Happy Coding..... :)
We're given a directed tree to work with. We define the concepts of p-ancestor and p-cousin as follows:
p-ancestor: A node is a 1-ancestor of another if it is its parent. It is the p-ancestor of a node if it is the parent of that node's (p-1)-ancestor.
p-cousin: A node is a p-cousin of another if they share the same p-ancestor.
For example, consider the tree below.
Node 4 has three 1-cousins, i.e., 3, 4 and 5, since they all share the common 1-ancestor, which is 1.
For a particular tree, the problem is as follows: you are given multiple pairs (node, p) and are supposed to count (and output) the number of p-cousins of each given node.
A slow algorithm would be to crawl up to the p-ancestor and run a BFS downwards for each query.
What is the (asymptotically) fastest way to solve the problem?
If an off-line solution is acceptable, two Depth first searches can do the job.
Assume that we can index all of those n queries (node, p) from 0 to n - 1.
We can convert each query (node, p) into another type of query (ancestor, p) as follows:
The answer for query (node, p), where node is at level a (its distance from the root is a), is the number of level-a descendants of the ancestor at level a - p. So, for each query, we first find that ancestor:
Pseudo code
dfs(int node, int level, int[] path, int[] ancestorForQuery, List<Query>[] data){
    path[level] = node;
    // ... visit all children recursively, passing level + 1 ...
    for(Query query : data[node])
        if(query.p <= level)
            ancestorForQuery[query.index] = path[level - query.p];
}
Now, after the first DFS, instead of the original query, we have a new type of query (ancestor, p)
Assume that we have an array count, where count[i] stores the number of nodes visited so far at level i. For a node a at level x, the number of its p-descendants (nodes at level x + p inside a's subtree) is:
query result = count[x + p] after we finish visiting a's subtree - count[x + p] before we visit a
Pseudo code
dfs2(int node, int level, int[] result, int[] count, List<TransformedQuery>[] data){
    count[level]++;
    for(TransformedQuery query : data[node]){
        result[query.index] -= count[level + query.p];  // snapshot before the subtree
    }
    // ... visit all children recursively, passing level + 1 ...
    for(TransformedQuery query : data[node]){
        result[query.index] += count[level + query.p];  // snapshot after the subtree
    }
}
The result of each query is stored in the result array.
If p is fixed, I suggest the following algorithm:
Let's say that count[v] is the number of p-children of v; initially all count[v] are set to 0. And pparent[v] is the p-parent of v.
Let's now run a DFS on the tree and keep a stack of visited nodes, i.e. when we visit some v, we push it onto the stack, and once we leave v, we pop it.
Suppose we've come to some node v in our DFS. Let's do count[stack[size - p]]++, indicating that v is a p-child of stack[size - p]. Also set pparent[v] = stack[size - p].
Once your dfs is finished, you can calculate the desired number of p-cousins of v like this:
count[pparent[v]]
The complexity of this is O(n) for the DFS plus O(1) per query, i.e. O(n + m) overall for m queries.
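For concreteness, here is a minimal Java sketch of this fixed-p precomputation (my illustration, not the answerer's code), assuming the tree is given as children lists, rooted at node 0, with pparent pre-filled with -1:

import java.util.List;

class FixedPCousins {
    static List<Integer>[] children; // children[v] = list of v's children
    static int[] count;              // count[v] = number of p-children of v
    static int[] pparent;            // pparent[v] = p-ancestor of v, or -1
    static int[] stack;              // nodes on the current root-to-v path
    static int top = 0, p;

    static void dfs(int v) {
        stack[top++] = v;            // push v
        if (top > p) {               // v is deep enough to have a p-ancestor
            int anc = stack[top - 1 - p];
            pparent[v] = anc;
            count[anc]++;            // v is a p-child of anc
        }
        for (int c : children[v]) dfs(c);
        top--;                       // leave v: pop it
    }
}

After dfs(root), the number of p-cousins of v (counting v itself, as in the example above) is pparent[v] < 0 ? 0 : count[pparent[v]]. The recursion depth equals the tree height, so very deep trees may need an explicit stack instead.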
First I'll describe a fairly simple way to answer each query in O(p) time that uses O(n) preprocessing time and space, and then mention a way that query times can be sped up to O(log p) time for a factor of just O(log n) extra preprocessing time and space.
O(p)-time query algorithm
The basic idea is that if we write out the sequence of nodes visited during a DFS traversal of the tree in such a way that every node is written out at a vertical position corresponding to its level in the tree, then the set of p-cousins of a node form a horizontal interval in this diagram. Note that this "writing out" looks very much like a typical tree diagram, except without lines connecting nodes, and (if a postorder traversal is used; preorder would be just as good) parent nodes always appearing to the right of their children. So given a query (v, p), what we will do is essentially:
Find the p-th ancestor u of the given node v. Naively this takes O(p) time.
Find the p-th left-descendant l of u -- that is, the node you reach after repeating the process of visiting the leftmost child of the current node, p times. Naively this takes O(p) time.
Find the p-th right-descendant r of u (defined similarly). Naively this takes O(p) time.
Return the value x[r] - x[l] + 1, where x[i] is a precalculated value that records the number of nodes in the sequence described above that are at the same level as, and at or to the left of, node i. This takes constant time.
The preprocessing step is where we calculate x[i], for each 1 <= i <= n. This is accomplished by performing a DFS that builds up a second array y[] that records the number y[d] of nodes visited so far at depth d. Specifically, y[d] is initially 0 for each d; during the DFS, when we visit a node v at depth d, we simply increment y[d] and then set x[v] = y[d].
O(log p)-time query algorithm
The above algorithm should already be fast enough if the tree is fairly balanced -- but in the worst case, when each node has just a single child, O(p) = O(n). Notice that it is navigating up and down the tree in the first 3 of the above 4 steps that force O(p) time -- the last step takes constant time.
To fix this, we can add some extra pointers to make navigating up and down the tree faster. A simple and flexible way uses "pointer doubling": For each node v, we will store log2(depth(v)) pointers to successively higher ancestors. To populate these pointers, we perform log2(maxDepth) DFS iterations, where on the i-th iteration we set each node v's i-th ancestor pointer to its (i-1)-th ancestor's (i-1)-th ancestor: this takes just two pointer lookups per node per DFS. With these pointers, moving any distance p up the tree always takes at most log(p) jumps, because the distance can be reduced by at least half on each jump. The exact same procedure can be used to populate corresponding lists of pointers for "left-descendants" and "right-descendants" to speed up steps 2 and 3, respectively, to O(log p) time.
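As a hedged illustration of the pointer-doubling table for upward navigation (a standard binary-lifting sketch; the names are mine, I assume parent[root] = -1 and 0 <= p < n, and the left-/right-descendant tables would be built the same way):

// up[i][v] = the 2^i-th ancestor of v, or -1 if it does not exist.
static int[][] buildUpTable(int[] parent, int n) {
    int LOG = 1;
    while ((1 << LOG) < n) LOG++;
    int[][] up = new int[LOG][];
    up[0] = parent;                  // the 1st-ancestor pointers
    for (int i = 1; i < LOG; i++) {
        up[i] = new int[n];
        for (int v = 0; v < n; v++)  // (i-1)-th ancestor's (i-1)-th ancestor
            up[i][v] = up[i - 1][v] < 0 ? -1 : up[i - 1][up[i - 1][v]];
    }
    return up;
}

// Walk p steps up from v in O(log p) jumps; returns -1 if no such ancestor.
static int pthAncestor(int[][] up, int v, int p) {
    for (int i = 0; v >= 0 && p > 0; i++, p >>= 1)
        if ((p & 1) == 1)
            v = up[i][v];
    return v;
}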
I want to count the number of nodes in a complete binary tree, but all I can think of is traversing the entire tree. That is an O(n) algorithm, where n is the number of nodes in the tree. What could be the most efficient algorithm to achieve this?
Suppose that we start off by walking down the left and right spines of the tree to determine their heights. We'll either find that they're the same, in which case the last row is full, or we'll find that they're different. If the heights come back the same (say the height is h), then we know that there are 2^h - 1 nodes and we're done.
Otherwise, the heights must be h+1 and h, respectively. We know that there are then at least 2^h - 1 nodes, plus the number of nodes in the bottom layer of the tree. The question, then, is how to figure that out. One way to do this would be to find the rightmost node in the last layer. If you know at which index that node is, you know exactly how many nodes are in the last layer, so you can add that to 2^h - 1 and you're done.
If you have a complete binary tree with left height h+1, then there are between 1 and 2^h - 1 possible nodes that could be in the last layer. The question is then how to determine this as efficiently as possible.
Fortunately, since we know the nodes in the last layer get filled in from the left to the right, we can use binary search to try to figure out where the last filled node in the last layer is. Essentially, we guess the index where it might be, walk from the root of the tree down to where that leaf should be, and then either find a node there (so we know that the rightmost node in the bottom layer is either that node or to the right) or we don't (so we know that the rightmost node in the bottom layer must purely be to the right of the current location). We can walk down to where the kth node in the bottom layer should be by using the bits in the number k to guide a search down: we start at the root, then go left if the first bit of k is 0 and right if the first bit of k is 1, then use the remaining bits in a corresponding manner to walk down the tree. The total number of times we'll do this is O(h) and each probe takes time O(h), so the total work done here is O(h^2). Since h is the height of the tree, we know that h = O(log n), so this algorithm takes O(log^2 n) time to complete.
I'm not sure whether it's possible to improve upon this algorithm. I can get an Ω(log n) lower bound on any correct algorithm, though. I'm going to argue that any algorithm that is always correct in all cases must inspect the rightmost leaf node in the final row of the tree. To see why, suppose there's a tree T where the algorithm doesn't do this. Let's suppose that the rightmost node that the algorithm inspects in the bottom row is x, that the actual rightmost node in the bottom row is y, and that the leftmost missing node in the bottom row that the algorithm detected is z. We know that x must be to the left of y (because the algorithm never inspected the rightmost node in the bottom row) and that y must be to the left of z (because y exists and z doesn't, so z must be further to the right than y). If you think about what the algorithm's "knowledge" is at this point, the algorithm has no idea whether or not there are any nodes purely to the right of x or purely to the left of z. Therefore, if we were to give it a modified tree T' where we deleted y, the algorithm wouldn't notice that anything had changed and would have exactly the same execution path on T and T'. However, since T and T' have a different number of nodes, the algorithm has to be wrong on at least one of them. Inspecting this node takes time at least Ω(log n) because of the time required to walk down the tree.
In short, you can do better than O(n) with the above O(log^2 n)-time algorithm, and you might be able to do even better than that, though I'm not entirely sure how or whether that's possible. I suspect it isn't, because I suspect that binary search is the optimal way to check the bottom row, and the total length of the paths down to the nodes you'd probe, even after taking into account that they share nodes in common, is Θ(log^2 n), but I'm not sure how to prove it.
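To make the probing concrete, here is a small Java sketch of the bit-guided walk plus the binary search over the bottom layer described above (my illustration, assuming a minimal TreeNode with left/right fields):

class TreeNode { TreeNode left, right; }

// Does slot k (0-based, counted from the left) exist in the bottom layer?
// The h bits of k, from most to least significant, steer the walk:
// 0 = go left, 1 = go right.
static boolean nodeExists(TreeNode root, int h, int k) {
    TreeNode cur = root;
    for (int bit = h - 1; bit >= 0 && cur != null; bit--)
        cur = ((k >> bit) & 1) == 0 ? cur.left : cur.right;
    return cur != null;
}

static int countNodes(TreeNode root) {
    if (root == null) return 0;
    int h = 0;                             // depth of the bottom layer
    for (TreeNode t = root.left; t != null; t = t.left) h++;
    if (h == 0) return 1;
    int lo = 0, hi = (1 << h) - 1;         // slot 0 always exists
    while (lo < hi) {                      // binary-search the last occupied slot
        int mid = (lo + hi + 1) >>> 1;
        if (nodeExists(root, h, mid)) lo = mid;
        else hi = mid - 1;
    }
    return (1 << h) - 1 + lo + 1;          // full top layers + bottom layer
}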
Hope this helps!
public int leftHeight(TreeNode root){
    int h = 0;
    while (root != null) {
        root = root.left;
        h++;
    }
    return h;
}

public int rightHeight(TreeNode root){
    int h = 0;
    while (root != null) {
        root = root.right;
        h++;
    }
    return h;
}

public int countNodes(TreeNode root) {
    if (root == null)
        return 0;
    int lh = leftHeight(root);
    int rh = rightHeight(root);
    if (lh == rh)                       // perfect subtree: 2^lh - 1 nodes
        return (1 << lh) - 1;
    return countNodes(root.left) + countNodes(root.right) + 1;
}
In each recursive call we traverse along the left and right boundaries of the complete binary tree to compute the left and right heights. If they are equal, the tree is full, with 2^h - 1 nodes. Otherwise we recurse on the left subtree and the right subtree. The first call, from the root (level 0), takes O(h) time to get the left and right heights. We recurse until we reach a subtree that is a full binary tree; in the worst case we go all the way down to a leaf. So the complexity is (h + (h-1) + (h-2) + ... + 0) = h(h+1)/2 = O(h^2). The space complexity is the size of the call stack, which is O(h).
NOTE: for a complete binary tree, h = log(n).
If the binary tree is definitely complete (as opposed to 'nearly complete' or 'almost complete' as defined in the Wikipedia article), you should simply descend down one branch of the tree to a leaf. This is O(log n). Then sum the powers of two up to this depth: 2^0 + 2^1 + ... + 2^d = 2^(d+1) - 1.
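A minimal sketch of that idea (assuming the last row is known to be full, and a TreeNode with a left field):

public int countPerfect(TreeNode root) {
    int d = -1;                      // depth of the deepest level
    for (TreeNode t = root; t != null; t = t.left) d++;
    return (1 << (d + 1)) - 1;       // 2^0 + 2^1 + ... + 2^d
}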
This C# sample might help others. Its time complexity is the same as in templatetypedef's well-explained answer above.
public int GetLeftHeight(TreeNode treeNode)
{
    int heightCnt = 0;
    while (treeNode != null)
    {
        heightCnt++;
        treeNode = treeNode.LeftNode;
    }
    return heightCnt;
}

public int CountNodes(TreeNode treeNode)
{
    int heightIndx = GetLeftHeight(treeNode);
    int nodeCnt = 0;
    while (treeNode != null)
    {
        int rightHeight = GetLeftHeight(treeNode.RightNode);
        nodeCnt += (int)Math.Pow(2, rightHeight); // (1 << rightHeight)
        treeNode = (rightHeight == heightIndx - 1) ? treeNode.RightNode : treeNode.LeftNode;
        heightIndx--;
    }
    return nodeCnt;
}
Using Recursion:
int countNodes(TreeNode* root) {
    if (!root) {
        return 0;
    } else {
        return countNodes(root->left) + countNodes(root->right) + 1;
    }
}
I have an unrooted, undirected, unweighted, non-binary tree. I know how to find the diameter of the tree (the greatest distance between any pair of nodes), but I'm interested in finding the number of pairs at that max distance. Is there an algorithm to find the number of pairs with diameter distance in better than O(V^2) time, where V is the number of nodes?
Thank you!
Yes, there's a linear-time algorithm that operates bottom-up and resembles the algorithm for just finding the diameter. Here's the signature in Java-ish pseudocode; I'll leave the algorithm itself as an exercise.
class Node {
    Collection<Node> children;
}

class Result {
    int height;          // height of the tree
    int num_deep_nodes;  // number of nodes whose depth equals the height
    int diameter;        // length of the longest path inside the tree
    int num_long_paths;  // number of pairs of nodes at distance |diameter|
}

Result computeNumberOfLongPaths(Node root); // recursive
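Since the algorithm itself is left as an exercise, here is one hedged way the recursion could be filled in (my sketch, not the answerer's code; field names mirror the pseudocode above). For each node it merges the (depth, count) of the deepest nodes in every branch, plus a virtual depth-0 branch for the node itself:

import java.util.*;

class DiameterPairs {
    static class Node { List<Node> children = new ArrayList<>(); }
    static class Result {
        int height;         // edges on the longest downward path
        long numDeepNodes;  // nodes at depth == height
        int diameter;       // longest path (in edges) inside the subtree
        long numLongPaths;  // pairs of nodes at distance == diameter
    }

    static Result computeNumberOfLongPaths(Node v) {
        Result res = new Result();
        List<long[]> branches = new ArrayList<>();   // {depth, count} per branch
        branches.add(new long[]{0, 1});              // v itself
        int bestChildDiam = -1;
        long bestChildPairs = 0;
        for (Node c : v.children) {
            Result r = computeNumberOfLongPaths(c);
            branches.add(new long[]{r.height + 1, r.numDeepNodes});
            if (r.diameter > bestChildDiam) {
                bestChildDiam = r.diameter;
                bestChildPairs = r.numLongPaths;
            } else if (r.diameter == bestChildDiam) {
                bestChildPairs += r.numLongPaths;
            }
        }
        branches.sort((a, b) -> Long.compare(b[0], a[0]));   // deepest first
        res.height = (int) branches.get(0)[0];
        for (long[] b : branches)
            if (b[0] == res.height) res.numDeepNodes += b[1];
        // The longest path whose topmost node is v joins the two deepest branches.
        long crossDiam = 0, crossPairs = 0;
        if (branches.size() >= 2) {
            long d1 = branches.get(0)[0], d2 = branches.get(1)[0];
            crossDiam = d1 + d2;
            if (d1 == d2) {          // pairs drawn from distinct equal-depth branches
                long sum = 0, sumSq = 0;
                for (long[] b : branches)
                    if (b[0] == d1) { sum += b[1]; sumSq += b[1] * b[1]; }
                crossPairs = (sum * sum - sumSq) / 2;
            } else {                 // unique deepest branch pairs with depth-d2 branches
                long c2 = 0;
                for (long[] b : branches)
                    if (b[0] == d2) c2 += b[1];
                crossPairs = branches.get(0)[1] * c2;
            }
        }
        res.diameter = (int) Math.max(crossDiam, bestChildDiam);
        if (crossDiam == res.diameter) res.numLongPaths += crossPairs;
        if (bestChildDiam == res.diameter) res.numLongPaths += bestChildPairs;
        return res;
    }
}

Sorting each branch list makes this O(n log n) in the worst case; tracking only the two deepest branch depths (and their counts) restores linear time.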
Yes, there is an algorithm with O(V+E) time. It is simply a modified version of finding the diameter.
As we know, we can find the diameter using two calls of BFS: make the first call from any node, remember the last node discovered, u, then run BFS(u) and remember the last node discovered, say v. The distance between u and v gives us the diameter.
Coming to the number of pairs with that max distance:
1. Before invoking the first BFS, initialize an array distance of length |V| with distance[s] = 0, where s is the starting vertex of the first BFS call.
2. In the BFS, modify the while loop as:
while (Q is not empty)
{
    e = dequeue(Q)
    for all vertices w adjacent to e
    {
        if (w is not visited)
        {
            enqueue(w)
            mark w as visited
            distance[w] = distance[e] + 1
            parent[w] = e
        }
    }
}
3. As said, remember the last node visited; u is that node. Now count the number of vertices at the same level as vertex u. Let mark be an array of length n with all values initialized to 0; 0 means the vertex has not been counted yet.
n1 = 0
for i = 1 to number of vertices
{
    if (distance[i] == distance[u] && mark[i] == 0)
    {
        n1++
        mark[i] = 1   /* vertex counted */
    }
}
n1 gives the number of vertices at the same level as vertex u. All vertices with mark[i] = 1 are now marked and will not be counted again.
4. Similarly, before performing the second BFS from u, initialize another array distance2 of length |V| with distance2[u] = 0.
5. Run BFS(u) and again take the last node discovered, say v.
6. Repeat step 3, this time on the distance2 array, with a different counter n2 = 0 and the condition:
if (distance2[i] == distance2[v] && mark[i] == 0)
    n2++
else if (distance2[i] == distance2[v] && mark[i] == 1)
    set_common = 1
7. set_common is a global flag that is set when there is a set of vertices such that the path between any two of them is a diameter, and the first BFS did not mark all of those vertices but did mark at least one of them; that is why mark[i] == 1 occurs.
Suppose the first BFS had marked all such vertices in the first call; then n2 would be 0 and set_common would not be set, and it would not be needed either. But that situation is handled the same way as above.
In any such case, the number of pairs giving the diameter is:
C(n1 + n2, 2) - X = (n1 + n2)! / (2! (n1 + n2 - 2)!) - X
I will elaborate on what X is below. Otherwise the number of pairs is n1 * n2, which is the case when two disjoint sets of vertices give the diameter.
So the condition used is:
if (n2 == 0 || set_common == 1)
    number_of_pairs = C(n1 + n2, 2) - X
else
    number_of_pairs = n1 * n2
Now, about X. It can happen that some of the marked vertices share a common parent; two such vertices are only distance 2 apart, so we must not count their combinations. Therefore, before using the above condition, it is advised to run the following:
X = 0   /* initialize */
for (i = 1 to number of vertices)
{
    s = 0, p = -1
    if (mark[i] == 0)
        continue
    else
    {
        s++
        if (p == -1)
            p = parent[i]
        while ((i + 1) <= number_of_vertices && p == parent[i + 1])
        {
            s++; i++
        }
    }
    if (s > 1)
        X = X + C(s, 2)
}
Proof of correctness
It is quite easy: since BFS traverses a tree level by level, n1 gives the number of vertices at the level of u, and n2 gives the number of vertices at the level of v, and the distance between u and v equals the diameter. Therefore, the distance between any vertex on u's level and any vertex on v's level equals the diameter.
The time taken is 2·O(|V|) + 2·(time of one BFS) = O(V + E).
I had a test today (Data Structures course), and one of the questions was the following:
Given an undirected, unweighted graph G = (V, E), you need to write an algorithm that, for a given node s, returns the shortest paths from s to all the nodes v' in the complement graph.
The complement graph G' = (V', E') contains an edge between any two nodes of G that don't share an edge in G, and only those.
The algorithm needs to run in O(V+E) (of the original graph).
I asked 50 different students, and not even one of them solved it correctly.
any Ideas?
Thanks a lot,
Barak.
The course staff have published the official answers to the test.
The answer is:
"The algorithm is based on a BFS with a few adaptations.
For each node in the graph we will add 2 fields, next and prev. Using these two fields we can maintain two doubly-linked lists of nodes: L1, L2.
At the beginning of every iteration of the algorithm, L1 holds all the white nodes in the graph, and L2 is empty.
The BFS code (without the initialization) is:
At the end of the loop in lines 3-5, L1 contains all the white nodes that aren't adjacent to u in G, or in other words, all the white nodes that are adjacent to u in the complement graph.
Therefore the runtime of the algorithm equals the runtime of the original BFS on the complement graph.
The time is O(V+E) because lines 4-5 are executed at most 2E times, and lines 7-9 are executed at most V times (every node can leave L1 only once)."
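The code listing itself did not survive here, so the line numbers above refer to the missing snippet. As a hedged reconstruction of the technique being described (my sketch, with illustrative names), the linked-list BFS might look like this in Java:

import java.util.*;

class ComplementBFS {
    // dist[v] = distance from s to v in the complement of G; -1 if unreachable.
    static int[] complementBfs(List<Integer>[] adj, int n, int s) {
        // L1 = doubly linked list of white nodes via next/prev; n is a sentinel.
        int[] next = new int[n + 1], prev = new int[n + 1];
        boolean[] inL1 = new boolean[n];
        next[n] = n; prev[n] = n;
        for (int v = 0; v < n; v++) {
            if (v == s) continue;
            next[prev[n]] = v; prev[v] = prev[n];   // append before the sentinel
            next[v] = n; prev[n] = v;
            inL1[v] = true;
        }
        int[] dist = new int[n];
        Arrays.fill(dist, -1);
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        dist[s] = 0;
        queue.add(s);
        while (!queue.isEmpty()) {
            int u = queue.poll();
            // "Lines 3-5": move every G-neighbour of u out of L1 into L2.
            List<Integer> l2 = new ArrayList<>();
            for (int v : adj[u]) {
                if (!inL1[v]) continue;
                next[prev[v]] = next[v]; prev[next[v]] = prev[v];   // unlink v
                inL1[v] = false;
                l2.add(v);
            }
            // "Lines 7-9": every node still in L1 is a complement-neighbour of u.
            for (int w = next[n]; w != n; w = next[w]) {
                dist[w] = dist[u] + 1;
                queue.add(w);
                inL1[w] = false;
            }
            next[n] = n; prev[n] = n;               // L1 := L2 for the next round
            for (int v : l2) {
                next[prev[n]] = v; prev[v] = prev[n];
                next[v] = n; prev[n] = v;
                inL1[v] = true;
            }
        }
        return dist;
    }
}

Each edge is touched at most once per dequeued endpoint, and each node leaves L1 for the queue exactly once, which is where the O(V+E) bound comes from.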
Note: this is the original solution translated from Hebrew.
I Hope you find it helpful, and thank you all for helping me out,
Barak.
I would like to propose a different approach.
Initialization:-
Create a list of undiscovered nodes; let's call it undiscovered and initialize it with all nodes.
Then we will run a modified version of BFS
Create a Queue(Q) and add start node to it
Main algo
while undiscovered.size() > 0 && Q is not empty:
    curr_node = DEQUEUE(Q)
    build the list of curr_node's neighbours in the complement graph
        (call it complement_edges) by looping through all nodes in
        undiscovered and keeping those not connected to curr_node
    for each node in complement_edges:
        update its distance if this one is shorter
        remove the node from undiscovered
        ENQUEUE(Q, node)
Some things to note here,
If the initial graph is sparse, then undiscovered will become empty very quickly.
During implementation, use hashing to store the edges of the graph; this makes step 2 fast.
Here's the sample code:
HashSet<Integer> adjList[];  // graph stored as adjacency list; nodes are 1..N

public int[] calc_distance(int start) {
    HashSet<Integer> undiscovered = new HashSet<>();
    for (int i = 1; i <= N; i++) {
        undiscovered.add(i);
    }
    undiscovered.remove(start);  // the start node is already discovered
    int[] dist = new int[N + 1];
    Arrays.fill(dist, Integer.MAX_VALUE / 4);
    Queue<Integer> q = new LinkedList<>();
    q.add(start);
    dist[start] = 0;
    while (!q.isEmpty() && undiscovered.size() > 0) {
        int curr = q.poll();
        LinkedList<Integer> complement_edges = new LinkedList<>();
        for (int child : undiscovered) {
            if (!adjList[curr].contains(child)) {
                // curr and child are connected in the complement
                complement_edges.add(child);
            }
        }
        for (int child : complement_edges) {
            if (dist[child] > dist[curr] + 1) {
                dist[child] = dist[curr] + 1;
            }
            // remove the newly discovered nodes from undiscovered
            undiscovered.remove(child);
            q.add(child);
        }
    }
    return dist;
}
UPDATE
I worked out an algorithm that I think runs in O(n*k) time. Below is the pseudo-code:
routine heaviestKPath( T, k )
    // create a 2D matrix with n rows and k+1 columns, each element = -∞
    // (the extra 0th column must be all 0s for a later function to work
    // properly, and it simplifies the algorithm)
    matrix = new array[ T.getVertexCount() ][ k + 1 ] (-∞);
    // set all elements in the first column of this matrix = 0
    for each row r:
        matrix[ r ][ 0 ] = 0;
    // fill our matrix by traversing the tree
    traverseToFillMatrix( T.root, k );
    // consider paths that arc over a node
    globalMaxWeight = -∞;
    findArcs( T.root, k );
    return globalMaxWeight
end routine

// node = the current node; k = the path length; node.lc = node's left child;
// node.rc = node's right child; node.idx = node's index (row) in the matrix;
// node.lc.wt/node.rc.wt = weight of the edge to the left/right child;
routine traverseToFillMatrix( node, k )
    if (node == null) return;
    traverseToFillMatrix( node.lc, k );   // recurse left
    traverseToFillMatrix( node.rc, k );   // recurse right
    // in the case that a left/right child doesn't exist, or both,
    // let's assume the code is smart enough to handle these cases
    matrix[ node.idx ][ 1 ] = max( node.lc.wt, node.rc.wt );
    for i = 2 to k {
        // max returns the heavier of the 2 paths
        matrix[ node.idx ][ i ] = max( matrix[ node.lc.idx ][ i-1 ] + node.lc.wt,
                                       matrix[ node.rc.idx ][ i-1 ] + node.rc.wt );
    }
end routine

// node = the current node, k = the path length
routine findArcs( node, k )
    if (node == null) return;
    nodeMax = matrix[ node.idx ][ k ];
    i = 1;
    j = k - 1;
    while ( i + j == k AND i < k ) {
        left  = node.lc.wt + matrix[ node.lc.idx ][ i-1 ];
        right = node.rc.wt + matrix[ node.rc.idx ][ j-1 ];
        if ( left + right > nodeMax ) {
            nodeMax = left + right;
        }
        i++; j--;
    }
    // if this node's max weight is larger than the global max weight, update
    if ( globalMaxWeight < nodeMax ) {
        globalMaxWeight = nodeMax;
    }
    findArcs( node.lc, k );   // recurse left
    findArcs( node.rc, k );   // recurse right
end routine
Let me know what you think. Feedback is welcome.
I think I have come up with two naive algorithms that find the heaviest length-constrained path in a weighted binary tree. First, a description of the problem: given an n-vertex binary tree with weighted edges and some value k, find the heaviest path of length k.
For both algorithms, I'll need a reference to all vertices, so I'll just do a simple traversal of the tree to collect them, with each vertex holding a reference to its left, right, and parent nodes in the tree.
Algorithm 1
For this algorithm, I'm basically planning on running a DFS from each node in the tree, with consideration for the fixed path length. In addition, since the path I'm looking for has the potential of going from the left subtree through the root to the right subtree, I will have to consider 3 directions at each node. But this will result in an O(n*3^k) algorithm, and I don't like that.
Algorithm 2
I'm essentially thinking about using a modified version of Dijkstra's algorithm in order to handle a fixed path length. Since I'm looking for the heaviest path and Dijkstra's algorithm finds the lightest, I'm planning on negating all edge weights before starting the traversal. Actually... this doesn't make sense, since I'd have to run Dijkstra's from each node, and that doesn't seem much more efficient than the algorithm above.
So I guess my main questions are these: first, do the algorithms I've described above solve the problem at hand? I'm not totally certain the Dijkstra's version will work, as Dijkstra's algorithm is meant for non-negative edge weights.
Now, I am sure there exist more clever/efficient algorithms for this... what is a better algorithm? I've read about "Using spine decompositions to efficiently solve the length-constrained heaviest path problem for trees" but that is really complicated and I don't understand it at all. Are there other algorithms that tackle this problem, maybe not as efficiently as spine decomposition but easier to understand?
You could use a DFS downwards from each node that stops after k edges to search for paths, but notice that this will do 2^k work at each node for a total of O(n*2^k) work, since the number of paths doubles at each level you go down from the starting node.
As DasBoot says in a comment, there is no advantage to using Dijkstra's algorithm here, since its cleverness amounts to choosing the shortest (or longest) way to get between 2 points when multiple routes are possible. With a tree there is always exactly 1 way.
I have a dynamic programming algorithm in mind that will require O(nk) time. Here are some hints:
If you choose some leaf vertex to be the root r and direct all other vertices downwards, away from the root, notice that every path in this directed tree has a highest node -- that is, a unique node that is nearest to r.
You can calculate the heaviest length-k path overall by going through each node v and calculating the heaviest length-k path whose highest node is v, finally taking the maximum over all nodes.
A length-k path whose highest node is v must have a length-i path descending towards one child and a length-(k-i) path descending towards the other.
That should be enough to get you thinking in the right direction; let me know if you need further help.
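For what it's worth, here is a minimal Java sketch of the O(nk) dynamic program those hints point at (my illustration; I assume a binary tree whose per-edge weights are stored on the parent as leftWt/rightWt):

import java.util.Arrays;

class HeaviestKPath {
    static class TNode {
        TNode left, right;
        long leftWt, rightWt;    // weights of the edges to the children
    }

    static final long NEG = Long.MIN_VALUE / 4;  // "minus infinity" sentinel
    static long best = NEG;                      // heaviest length-k path found
    static int k;

    // Returns down[i] = max weight of a path of exactly i edges descending
    // from v (NEG if no such path exists), for i = 0..k.
    static long[] dfs(TNode v) {
        long[] down = new long[k + 1];
        Arrays.fill(down, NEG);
        down[0] = 0;
        long[] l = (v.left  != null) ? dfs(v.left)  : null;
        long[] r = (v.right != null) ? dfs(v.right) : null;
        for (int i = 1; i <= k; i++) {
            if (l != null && l[i - 1] > NEG)
                down[i] = Math.max(down[i], l[i - 1] + v.leftWt);
            if (r != null && r[i - 1] > NEG)
                down[i] = Math.max(down[i], r[i - 1] + v.rightWt);
        }
        // Paths whose highest node is v: i edges down one side, k - i down the other.
        if (l != null && r != null)
            for (int i = 1; i < k; i++)
                if (l[i - 1] > NEG && r[k - i - 1] > NEG)
                    best = Math.max(best, l[i - 1] + v.leftWt + r[k - i - 1] + v.rightWt);
        if (down[k] > NEG)
            best = Math.max(best, down[k]);      // straight descending paths count too
        return down;
    }
}

Calling dfs(root) does O(k) work per node, giving the O(nk) bound; the "highest node" from hint 1 is exactly the v at which the two descending halves are joined.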
Here's my solution. Feedback is welcome.
Let's treat the binary tree as a directed graph, with edges going from parent to children. Let's define two concepts for each vertex v:
a) an arc: a directed path that starts at vertex v, with all vertices on the path being descendants of v.
b) a child-path: a directed or non-directed path containing v; that is, it could start anywhere, end anywhere, and go from a child of v up to v and then, say, down to its other child. The set of arcs is a subset of the set of child-paths.
We also define a function HeaviestArc(v,j), which gives the heaviest arc of length j starting at v, on either the left or the right side. We also define LeftHeaviest(v,j) and RightHeaviest(v,j) as the heaviest left and right arcs of length j, respectively.
Given this, we can define the following recurrences for each vertex v, based on its children:
LeftHeaviest(v,j)  = weight(LeftEdge(v))  + HeaviestArc(LeftChild(v), j-1);
RightHeaviest(v,j) = weight(RightEdge(v)) + HeaviestArc(RightChild(v), j-1);
HeaviestArc(v,j)   = max(LeftHeaviest(v,j), RightHeaviest(v,j));
Here j goes from 1 to k, and HeaviestArc(v,0) = LeftHeaviest(v,0) = RightHeaviest(v,0) = 0 for all v. For leaf nodes, HeaviestArc(v,0) = 0 and HeaviestArc(v,j) = -inf for all other j (I need to think about corner cases more thoroughly).
And then HeaviestChildPath(v), the heaviest child-path containing v, can be calculated as:
HeaviestChildPath(v) = max over j = 0..k of ( LeftHeaviest(v,j) + RightHeaviest(v,k-j) )
The heaviest path should be the heaviest of all child paths.
The estimated runtime of the algorithm is O(kn).
# Assumes each node carries .left, .right, .parent and .weight fields.
def traverse(node, came_from, running_weight, level):
    global max_weight
    if node is None:
        return
    if level == 0:
        if max_weight < running_weight:
            max_weight = running_weight
        return
    # step to each neighbour, but never straight back to where we came from
    for nxt in (node.left, node.right, node.parent):
        if nxt is not None and nxt is not came_from:
            traverse(nxt, node, running_weight + node.weight, level - 1)

max_weight = 0
for node in tree:               # try every node as a starting point
    traverse(node, None, 0, k)  # k = required path length