Given an undirected and connected graph G, find a spanning tree whose diameter is minimum.
singhsumit linked the relevant paper by Hassin and Tamir, entitled "On the minimum diameter spanning tree problem", but his answer is currently deleted. The main idea from the paper is that finding a minimum diameter spanning tree in an undirected graph can be accomplished by finding the "absolute 1-center" of the graph and returning a shortest path tree rooted there.
The absolute 1-center is the point, either at a vertex or on an edge, from which the distance to the furthest vertex is minimum. It can be found via an algorithm of Kariv and Hakimi (An Algorithmic Approach to Network Location Problems. I: the p-Centers) or an earlier algorithm of Hakimi, Schmeichel, and Pierce (On p-Centers in Networks), which I will attempt to reconstruct from just the running time and decades of hindsight. (Stupid paywalls.)
Use Floyd--Warshall or Johnson's algorithm to compute all-pairs distances. For each edge u--v, find the best candidate for a 1-center on that edge as follows.
Let the points on the edge u--v be indexed by µ ranging from 0 (u itself) to len(u--v) (v itself). The distance from the point at index µ to a vertex w is
min(µ + d(u, w), len(u--v) - µ + d(v, w)).
As a function of µ, this quantity is increasing and then decreasing, with the maximum at
µ = (len(u--v) + d(v, w) - d(u, w))/2.
Sort the vertices by this argmax. For each partition of the sorted array into a left subarray and a right subarray, compute the interval [a, b] of µ values that induce that argmax partition. Intersect this interval with [0, len(u--v)]; if the intersection is empty, move on. Otherwise, find the maximum distance L over the left subarray from the point on u--v indexed by a, and the maximum distance R over the right subarray from the point on u--v indexed by b. (The cost of computing these maxima can be amortized to O(1) per partition by scanning left-to-right and right-to-left at the beginning.) On [a, b] the left distances decrease with slope 1 and the right distances increase with slope 1, so the best choice is the µ in [a, b] that minimizes max(L - (µ - a), R - (b - µ)).
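Putting the pieces together, here is a sketch of the per-edge scan in Python (my own reconstruction, not the authors' code; all names are mine). It assumes an all-pairs distance matrix d has already been computed, and uses prefix/suffix maxima over the vertices sorted by argmax in place of the left/right scan described above:

```python
def edge_center_candidate(u, v, length, d, n):
    """Best 1-center restricted to edge u-v: returns (radius, mu).

    d is a precomputed all-pairs distance matrix (Floyd-Warshall or
    Johnson); mu is the center's distance from u along the edge.
    """
    NEG = float("-inf")
    # Argmax of min(mu + d[u][w], length - mu + d[v][w]) for each w.
    order = sorted(range(n), key=lambda w: (length + d[v][w] - d[u][w]) / 2)
    peaks = [(length + d[v][w] - d[u][w]) / 2 for w in order]

    # Prefix maxima of d[v][.] over vertices already past their peak
    # (decreasing branch) and suffix maxima of d[u][.] over vertices
    # still before their peak (increasing branch).
    pref = [NEG] * (n + 1)
    for i, w in enumerate(order):
        pref[i + 1] = max(pref[i], d[v][w])
    suf = [NEG] * (n + 1)
    for i in range(n - 1, -1, -1):
        suf[i] = max(suf[i + 1], d[u][order[i]])

    best = (float("inf"), 0.0)
    for i in range(n + 1):  # first i sorted vertices form the left part
        a = max(0.0, peaks[i - 1] if i > 0 else 0.0)
        b = min(float(length), peaks[i] if i < n else float(length))
        if a > b:
            continue  # interval does not intersect [0, length]
        # On [a, b] the worst left vertex sits at length - mu + pref[i]
        # and the worst right vertex at mu + suf[i]; their max is
        # minimized at the crossing point, clamped into [a, b].
        mu = min(max((length + pref[i] - suf[i]) / 2, a), b)
        radius = max(length - mu + pref[i], mu + suf[i])
        best = min(best, (radius, mu))
    return best
```

Running this for every edge and taking the best candidate gives the absolute 1-center; the minimum diameter spanning tree is then a shortest path tree rooted there.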
Given a weighted, connected, directed graph G = (V, E) with n vertices and m edges, and a precomputed n×n shortest-path distance matrix S, where S(i, j) denotes the weight of the shortest path from vertex i to vertex j.
We know that the weight of just one edge (u, v) has changed (increased or decreased).
For two specific vertices s and t, we want to update the shortest path length between them.
This can be done in O(1).
How is this possible? What is the trick behind this answer?
You certainly can for decreases. I assume S always refers to the old distances. Let l be the new weight of the edge (u, v). Check whether
S(s, u) + l + S(v, t) < S(s, t)
If so, the left-hand side is the new optimal distance between s and t; otherwise the old distance S(s, t) still holds.
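As a sketch (function name mine), the whole O(1) update for a decrease is just:

```python
def update_after_decrease(S, s, t, u, v, l):
    """New s-t distance after edge (u, v) decreases to weight l.

    S is the old all-pairs distance matrix; each query is O(1):
    either the old path survives, or the new path uses (u, v).
    """
    return min(S[s][t], S[s][u] + l + S[v][t])
```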
Increases are impossible. Consider the following graph (edges in red have zero weight):
Suppose m is the minimum-weight edge here, except for (u, v), which used to be lighter. Now we update (u, v) to some weight l > m. To find the new optimum length, we must find m.
Suppose we could do this in O(1) time. Then we could find the minimum of any array in O(1) time by feeding it into this algorithm: add (u, v) with weight -BIGNUMBER and then 'update' it to BIGNUMBER (we can construct the distance matrix lazily, because all distances are either 0, ∞, or just the edge weights). That is clearly impossible, so we can't solve this problem in O(1) either.
Given a tree with N vertices and a positive number K. Find the number of distinct pairs of the vertices which have a distance of exactly K between them. Note that pairs (v, u) and (u, v) are considered to be the same pair (1 ≤ N ≤ 50000, 1 ≤ K ≤ 500).
I am not able to find an optimal solution for this. I can do a BFS from each vertex and count the vertices at distance at most K from it, but then the worst-case complexity is O(n²). Is there a faster way?
You can achieve this in a simpler way.
Run a DFS on the tree and, for each vertex, compute its distance from the root; store these in an array (O(1) access).
For each pair of vertices in your graph:
Find their LCA (lowest common ancestor; there are algorithms that answer this in O(1) after preprocessing).
Let u and v be two arbitrary vertices and w their LCA. Subtract the distance from w to the root from the distance from u to the root: now you have the distance between u and w. Do the same for v. In O(1) you have the distances (u, w) and (v, w); sum them to get the distance (u, v), and all that is left is to compare it to K.
Final complexity is O(n²).
Improving upon the other answer's approach, we can make some key observations.
To calculate the distance between two nodes you need their LCA (lowest common ancestor) and their depths, as the other answer does. The formula used here is:
Dist(u, v) = depth[u] + depth[v] - 2 * depth[ lca(u, v) ]
depth[x] denotes the distance of x from the root, precomputed once with a DFS from the root node.
Now comes the key observation: you already have the distance value K. Assume that dist(u, v) = K and use this assumption to calculate (predict?) the depth of the LCA. Substituting K into the formula above gives:
depth[ lca(u, v) ] = (depth[u] + depth[v] - K) / 2
Now that you have depth of LCA you know that distance between u and lca is depth[u] - depth[ lca(u, v) ] and between v and lca is depth[v] - depth[ lca(u, v) ], let this be X and Y respectively.
Now, since the LCA is the lowest common ancestor, the Xth ancestor of u and the Yth ancestor of v should both be the LCA. So if the Xth ancestor of u and the Yth ancestor of v are indeed the same node, we can say that our assumption about the distances was true, and the distance between the two nodes is K.
You can calculate the Xth and Yth ancestors of the nodes in O(log N) using the binary lifting technique, with O(N log N) preprocessing; this preprocessing can be folded directly into your DFS when a node is first visited.
Corner Cases:
The calculated depth of the LCA must not be fractional or negative.
If the depth of node u or v equals the calculated LCA depth, then that node is itself the ancestor of the other node.
Consider this tree:
Assuming K = 4, the formula above gives depth[lca] = 1, and the Xth and Yth ancestors of u and v both come out to the same node 1, which would seem to validate our assumption. But this is not true: the distance between u and v is actually 2, as visible in the picture above, because the LCA in this case is actually 2. To handle this case, also calculate the (X-1)th and (Y-1)th ancestors of u and v, respectively, and check that they are different.
Final Complexity: O(NlogN)
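A sketch of this check for a single pair in Python (all names mine), combining the depth formula, binary lifting, and the corner case above, assuming an unweighted tree given as an adjacency list:

```python
LOG = 17  # 2**17 > 50000, enough levels for the stated bound on N

def preprocess(n, adj, root=0):
    """Iterative DFS computing depth[] and the binary-lifting table
    up[k][v] (the 2**k-th ancestor of v); the root is its own parent."""
    depth = [0] * n
    up = [[0] * n for _ in range(LOG)]
    parent = [root] * n
    seen = [False] * n
    seen[root] = True
    stack = [root]
    while stack:
        v = stack.pop()
        up[0][v] = parent[v]
        for k in range(1, LOG):
            up[k][v] = up[k - 1][up[k - 1][v]]
        for w in adj[v]:
            if not seen[w]:
                seen[w] = True
                parent[w] = v
                depth[w] = depth[v] + 1
                stack.append(w)
    return depth, up

def kth_ancestor(v, k, up):
    for b in range(LOG):
        if k >> b & 1:
            v = up[b][v]
    return v

def dist_is_k(u, v, K, depth, up):
    """True iff dist(u, v) == K, via the predicted-LCA-depth trick."""
    s = depth[u] + depth[v] - K
    if s < 0 or s % 2:
        return False                 # LCA depth negative or fractional
    dl = s // 2                      # predicted depth of lca(u, v)
    if dl > min(depth[u], depth[v]):
        return False
    x, y = depth[u] - dl, depth[v] - dl
    if kth_ancestor(u, x, up) != kth_ancestor(v, y, up):
        return False
    # Corner case from above: the real LCA may be deeper than predicted;
    # the ancestors one step lower must then differ (when both exist).
    if x and y and kth_ancestor(u, x - 1, up) == kth_ancestor(v, y - 1, up):
        return False
    return True
```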
I have seen this problem before for unweighted trees and trees with positive edge weights, but have not seen a solution for trees that could have negative weights.
For reference, the center of a tree is defined as the vertex that minimizes the maximum distance to any other vertex.
It is exactly the same as the dynamic programming solution for a tree with positive edge weights. Run depth-first search twice (we can pick an arbitrary vertex as the root). In the first pass, compute distIn(v), the longest distance from v to a vertex in v's subtree; I think this part is straightforward (I can elaborate if necessary). In the second pass, compute for every v the distance distOut(v) to the furthest vertex not inside v's subtree. Here is pseudocode for it:
void computeDistOut(v, pathFromParent)
    distOut(v) = pathFromParent
    childDists = empty list
    for c : children of v
        childDists.add(distIn(c) + cost(v, c))
    for c : children of v
        maxDist = max among childDists, excluding (distIn(c) + cost(v, c))
        computeDistOut(c, max(pathFromParent, maxDist) + cost(v, c))
The maximum distance from each vertex is max(distIn(v), distOut(v)). Now we can just pick a vertex that minimizes this value(it is a center by the definition).
About the time complexity: it is linear if implemented properly. Instead of maintaining a list of distances to all children (childDists in the pseudocode), store just the two largest values among them; that yields each maxDist in O(1) (it is either the first or the second maximum).
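A sketch of this linear-time solution in Python (an iterative rewrite of the pseudocode above, names mine). It uses -inf for "no such vertex" and max(0, .) to let a path stop at a child, which keeps negative weights correct:

```python
def tree_center(n, adj):
    """Center of a tree whose edge weights may be negative.

    adj[v] = list of (neighbor, weight). Returns (center, eccentricity),
    where eccentricity is the maximum path distance to any other vertex.
    """
    NEG = float("-inf")
    parent, pcost = [-1] * n, [0] * n
    order, seen = [], [False] * n
    seen[0] = True
    stack = [0]
    while stack:                        # preorder traversal
        v = stack.pop()
        order.append(v)
        for nbr, wt in adj[v]:
            if not seen[nbr]:
                seen[nbr] = True
                parent[nbr], pcost[nbr] = v, wt
                stack.append(nbr)

    din = [NEG] * n                     # distIn: furthest vertex in subtree
    for v in reversed(order):
        if parent[v] != -1:
            din[parent[v]] = max(din[parent[v]],
                                 pcost[v] + max(0, din[v]))

    dout = [NEG] * n                    # distOut: furthest vertex outside
    for v in order:
        best1, best2 = NEG, NEG         # two largest child candidates
        for nbr, wt in adj[v]:
            if nbr == parent[v]:
                continue
            cand = wt + max(0, din[nbr])
            if cand > best1:
                best1, best2 = cand, best1
            elif cand > best2:
                best2 = cand
        for nbr, wt in adj[v]:
            if nbr == parent[v]:
                continue
            cand = wt + max(0, din[nbr])
            sibling = best2 if cand == best1 else best1
            # from nbr: go up to v, then stop, go further out, or
            # descend into the best sibling subtree
            dout[nbr] = wt + max(0, dout[v], sibling)

    center = min(range(n), key=lambda v: max(din[v], dout[v]))
    return center, max(din[center], dout[center])
```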
I couldn't get my head around this question and need help; just a direction would be great.
Let there be a rooted directed tree T, not necessarily binary.
There is a weight associated with each vertex, such that the weight of a vertex is greater than the weight of the vertex's parent.
Each vertex can be designated as either a regular or a pivot vertex.
The cost of a pivot vertex is the same as its weight.
Regular vertices get discounts: their cost is their weight minus the weight of the closest ancestor that is a pivot vertex.
Thus, selecting a vertex as a pivot vertex may increase its cost, but it will decrease the costs of some of its descendants.
If a regular vertex has no ancestor which is a pivot vertex, then its cost is its weight itself. There is no limit on the number of pivot vertices.
I have to design an efficient algorithm to designate every vertex as either a regular vertex or a pivot vertex, such that the total cost of all vertices is minimized.
Here's another dynamic program. I'm going to assume that the root must be a pivot node. For each node u, let W[u] be the weight of u. For nodes u and v such that u = v or u is a proper descendant of v, let C[u, v] be the optimum cost of the subtree rooted at u given that u's leafmost pivot ancestor is v. Then we have a recurrence
C[u, u] = W[u] + sum over children t of u of C[t, u];
C[u, v] | u is a proper descendant of v
= (W[u] - W[v]) + sum over children t of u of min(C[t, t], C[t, v]).
There is no separate base case because the sum may be empty. The running time is O(number of descendant-ancestor pairs), which is O(n^2).
You could use dynamic programming where DP[i,k] represents the smallest cost of the subtree rooted at vertex i assuming that looking into the subtree we can see k regular vertices (the concept is that pivot nodes are opaque, while regular nodes are transparent).
When working out the cost we use the normal logic everywhere except at these k regular vertices: each of them is assigned its weight, with the discount not yet applied.
The point is that when we assign a pivot node above this subtree of weight x, we can instantly calculate the final cost of the subtree by applying k times the discount x.
It is not clear whether this is efficient enough for your case because you have not said how large your graph is.
There will be O(n^2) entries in the dynamic programming table (where n is the number of vertices).
I would expect the recurrence relation to take O(n) to compute, so this will give an overall complexity of O(n^3).
Here is an exercise:
In certain graph problems, vertices can have weights instead of or in addition to the weights of edges. Let Cv be the cost of vertex v, and C(x, y) the cost of the edge (x, y). This problem is concerned with finding the cheapest path between vertices a and b in a graph G. The cost of a path is the sum of the costs of the edges and vertices encountered on the path.

(a) Suppose that each edge in the graph has a weight of zero (while non-edges have a cost of ∞). Assume that Cv = 1 for all vertices 1 ≤ v ≤ n (i.e., all vertices have the same cost). Give an efficient algorithm to find the cheapest path from a to b and its time complexity.

(b) Now suppose that the vertex costs are not constant (but are all positive) and the edge costs remain as above. Give an efficient algorithm to find the cheapest path from a to b and its time complexity.

(c) Now suppose that both the edge and vertex costs are not constant (but are all positive). Give an efficient algorithm to find the cheapest path from a to b and its time complexity.
Here is my answer:
(a) Use normal BFS;
(b) Use Dijkstra's algorithm, but replace each edge's weight with the weight of its target vertex;
(c)
Also use Dijkstra's algorithm.
Considering only edge weights, the key relaxation step of Dijkstra's algorithm is:
if (distance[y] > distance[v] + weight) {
    distance[y] = distance[v] + weight; // weight is between v and y
}
Now, taking the vertex weights into account as well, we have:
if (distance[y] > distance[v] + weight + vertexWeight[y]) {
    distance[y] = distance[v] + weight + vertexWeight[y]; // weight is between v and y
}
Am I right?
I suspect my answer to (c) is too simple. Is it?
You are on the right track, and the solution is very simple.
For both (b) and (c), reduce the problem to ordinary Dijkstra, which assumes no weights on the vertices.
To do this, define a new weight function for edges, w': E -> R:
w'(u,v) = w(u,v) + vertex_weight(v)
In (b), w(u,v) = 0 (or a constant), and the same solution fits (c) as well!
The idea is that each edge's new cost accounts both for the weight of the edge and for the cost of reaching the target vertex. The cost of the source was already paid, so you disregard it (1).
Reducing to a known problem, instead of modifying an algorithm, is usually much simpler to use, prove, and analyze!
(1) This solution "misses" the weight of the source, so the shortest path from s to t is dijkstra(s, t, w') + vertex_weight(s), where dijkstra(s, t, w') is the distance from s to t using our w'.
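A sketch of the reduction in Python (names mine), running ordinary Dijkstra on w' and adding the source's vertex weight at the end:

```python
import heapq

def dijkstra_vertex_weights(adj, vw, s, t):
    """Cheapest s-to-t path where both edges and vertices have costs.

    adj[u] = list of (v, w) directed edges; vw[v] = vertex weight.
    Ordinary Dijkstra on w'(u, v) = w(u, v) + vw[v], plus the source's
    vertex weight, which the reduction leaves out.
    """
    INF = float("inf")
    dist = {u: INF for u in adj}
    dist[s] = 0
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w + vw[v]          # relax with w'(u, v)
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist[t] + vw[s]
```

For example, with edges 0→1 (1), 1→2 (1), 0→2 (5) and vertex weights [10, 1, 2], the cheapest path 0→1→2 costs 1 + 1 in edges plus 10 + 1 + 2 in vertices, i.e. 15, which the reduction recovers.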
The vertex weights can also be removed by splitting every vertex a into two vertices a1 and a2, joined by an edge from a1 to a2 whose weight is the vertex weight of a.
I think you are right about the adaptation of Dijkstra's algorithm.