Create an algorithm for computing the shortest path with constraints on the vertices in O(m + n log n)

So I'm trying to write an algorithm for computing the shortest path with constraints on the vertices you can visit in O(m + n log n) time. In this problem, we are given an undirected weighted graph G = (V, E) with non-negative edge weights, a set of vertices X ⊂ V, and two vertices s, t ∈ V \ X. The graph is given as adjacency lists, and we can assume that it takes O(1) time to determine whether a vertex is in X. The goal is to compute a shortest path between s and t in G that includes at most one vertex from X, if such a path exists, in O(m + n log n) time. I know this would require a modified Dijkstra's algorithm, but I'm unsure how to proceed from here. Could anyone please help me out? Thank you

Construct a new graph by taking two disjoint copies of your graph G (call them G0 and G1), and for each edge (v, x) in G (with x in X), add an additional edge in the combined graph from v in G0 to x in G1. This new graph has twice as many vertices as G, and at most three times as many edges.
The shortest path from s in G0 to t in G1 is the shortest path from s to t in G going through at least one vertex of X.
On this new graph, Dijkstra's algorithm (with the priority queue implemented as a Fibonacci heap) runs in time O(3m + 2n log 2n) = O(m + n log n).
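As a minimal sketch of that construction, assuming the input graph is a dict adj mapping each vertex to a list of (neighbor, weight) pairs (the name build_layered_graph and the representation are illustrative, not part of the answer):

def build_layered_graph(adj, X):
    # adj: dict vertex -> list of (neighbor, weight); X: set of vertices.
    # Vertices of the new graph are (v, 0) for the copy G0 and (v, 1) for G1.
    new_adj = {}
    for v, edges in adj.items():
        new_adj[(v, 0)] = [((u, 0), w) for u, w in edges]
        new_adj[(v, 1)] = [((u, 1), w) for u, w in edges]
    # Cross edges: traversing an edge into a vertex of X moves us from G0 to G1.
    for v, edges in adj.items():
        for u, w in edges:
            if u in X:
                new_adj[(v, 0)].append(((u, 1), w))
    return new_adj

Running Dijkstra's algorithm on new_adj from (s, 0) to (t, 1) then gives the shortest s-t path that uses at least one vertex of X, as described above.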

A possibility is to double the vertices that are not in X.
For each vertex v not in X, you create v0 and v1: v0 is reachable only without passing through any vertex in X, and v1 is reachable by passing through exactly one vertex in X.
Let w be another vertex. Then:
if w is in X and v is not in X:
    the edge (w, v0) does not exist (length infinite), and dist(v1) = min(dist(v1), dist(w) + length(w, v))
if w is in X and v is in X:
    the edge (w, v) does not exist (length infinite), since a path may not use two vertices of X
if w is not in X and v is not in X:
    dist(v0) = min(dist(v0), dist(w0) + length(w, v))
    dist(v1) = min(dist(v1), dist(w1) + length(w, v))
if w is not in X and v is in X:
    dist(v) = min(dist(v), dist(w0) + length(w, v))
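Below is a minimal Dijkstra sketch implementing these relaxation rules, assuming the graph is a dict adj mapping each vertex to a list of (neighbor, weight) pairs and X is a set with O(1) membership tests (names are illustrative). Note that Python's heapq is a binary heap, so this runs in O((m + n) log n); the stated O(m + n log n) bound requires a Fibonacci heap.

import heapq

def dijkstra_at_most_one_x(adj, X, s, t):
    # States: (v, 0) and (v, 1) for v not in X, and (x, 1) for x in X
    # (reaching a vertex of X already uses up the allowance); s, t are not in X.
    INF = float('inf')
    dist = {(s, 0): 0}
    pq = [(0, (s, 0))]
    while pq:
        d, (v, used) = heapq.heappop(pq)
        if d > dist.get((v, used), INF):
            continue                      # stale heap entry
        for u, w in adj[v]:
            if u in X:
                if v in X or used == 1:
                    continue              # never allow a second vertex of X
                nxt = (u, 1)
            else:
                nxt = (u, used)
            nd = d + w
            if nd < dist.get(nxt, INF):
                dist[nxt] = nd
                heapq.heappush(pq, (nd, nxt))
    return min(dist.get((t, 0), INF), dist.get((t, 1), INF))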

Related

Find the N highest cost vertices that have a path to S, where S is a vertex in an undirected graph G

I would like to know the most efficient way (w.r.t. space and time) to solve the following problem:
Given an undirected graph G = (V, E), a positive number N and a vertex S in V. Assume that every vertex in V has a cost value. Find the N highest cost vertices that are connected to S.
For example:
G = (V, E)
V = {v1, v2, v3, v4},
E = {(v1, v2),
(v1, v3),
(v2, v4),
(v3, v4)}
v1 cost = 1
v2 cost = 2
v3 cost = 3
v4 cost = 4
N = 2, S = v1
result: {v3, v4}
This problem can be solved easily by a graph traversal algorithm (e.g., BFS or DFS). To find the vertices connected to S, we can run either BFS or DFS starting from S. As the space and time complexities of BFS and DFS are the same (time complexity O(V+E), space complexity O(V) on top of the adjacency lists), here I am going to show the pseudocode using DFS:
Parameter Definition:
* G -> Graph
* S -> Starting node
* N -> Number of connected (highest cost) vertices to find
* Cost -> Array of size V, containing the vertex cost values
procedure DFS-traversal(G, S, N, Cost):
    let St be a stack
    let Q be a min-priority-queue holding pairs <cost, vertex-id>
    let discovered be an array (of size V) to mark already visited vertices
    St.push(S)
    // Comment: if you do not want to consider the case "S is connected to S",
    // you can comment out the following line
    Q.push(make-pair(Cost[S], S))
    label S as discovered
    while St is not empty:
        v = St.pop()
        for each edge from v to w in G.adjacentEdges(v) do:
            if w is not labeled as discovered:
                label w as discovered
                St.push(w)
                Q.push(make-pair(Cost[w], w))
                if Q.size() == N + 1:
                    Q.pop()                  // discard the smallest of the N + 1 costs
    let ret be an array of size N
    while Q is not empty:
        ret.append(Q.top().second)           // .second is the vertex-id
        Q.pop()
    return ret
Let's first describe the process. Here, I run the iterative version of DFS to traverse the graph starting from S. During the traversal, I use a priority queue to keep the N highest cost vertices that are reachable from S. Instead of the priority queue, we could use a simple array (or even reuse the discovered array) to keep a record of the reachable vertices together with their costs.
Analysis of space-complexity:
To store the graph (adjacency lists): O(V + E)
Priority-queue: O(N)
Stack: O(V)
For labeling discovered: O(V)
So, as O(V + E) is the dominating term here, we can consider O(V + E) as the overall space complexity.
Analysis of time-complexity:
DFS traversal: O(V + E)
To track the N highest cost vertices:
by maintaining the priority-queue: O(V log N)
or, alternatively, by collecting costs in an array and sorting it: O(V log V)
The overall time-complexity would be: O(V log N + E) or O(V log V + E)
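For reference, here is a hedged Python translation of the pseudocode above, using heapq as the min-priority-queue and assuming the graph is a dict adj of adjacency lists and cost is a dict of vertex costs (names are illustrative):

import heapq

def n_highest_cost_connected(adj, cost, S, N):
    # Iterative DFS from S, keeping the N highest-cost reachable vertices
    # in a min-heap of size at most N.
    discovered = {S}
    heap = [(cost[S], S)]          # include S itself; drop this line to exclude it
    stack = [S]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in discovered:
                discovered.add(w)
                stack.append(w)
                heapq.heappush(heap, (cost[w], w))
                if len(heap) > N:
                    heapq.heappop(heap)    # evict the current minimum cost
    return [v for _, v in heap]

On the example from the question:

adj = {'v1': ['v2', 'v3'], 'v2': ['v1', 'v4'], 'v3': ['v1', 'v4'], 'v4': ['v2', 'v3']}
cost = {'v1': 1, 'v2': 2, 'v3': 3, 'v4': 4}
print(n_highest_cost_connected(adj, cost, 'v1', 2))    # -> ['v3', 'v4'] (in some order)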

Find the Optimal vertex cover of a tree with k blue vertices

I need to find a dynamic-programming kind of solution for the following problem:
Input:
A perfect binary tree T = (V, E) (each node has exactly 2 children, except the leaves).
V = V(blue) ∪ V(black).
V(blue) ∩ V(black) = ∅.
(In other words, some vertices in the tree are blue.)
The root of the tree, r.
An integer k.
A legal Solution:
A subset of vertices V' ⊆ V which is a vertex cover of T, and |V' ∩ V(blue)| = k. (In other words, the cover V' contains k blue vertices)
Solution Value:
The value of a legal solution V' is the number of vertices in the set = |V'|.
For convenience, we will define the value of an illegal solution to be ∞.
What we need to find:
A solution with minimal Value.
(In other words, The best solution is a solution which is a cover, contains exactly k blue vertices and the number of vertices in the set is minimal.)
I need to define a typical sub-problem (i.e., if I know the optimal value for a subtree, I can use it to find the optimal value for the whole problem)
and suggest a recurrence formula to solve it.
To me, it looks like you are on the right track!
Still, I think you will have to use an additional parameter to tell us how far any picked vertex is from the current subtree's root.
For example, it can be just the indication whether we pick the current vertex, as below.
Let fun (v, b, p) be the optimal size for subtree with root v such that, in this subtree, we pick exactly b blue vertices, and p = 1 if we pick vertex v or p = 0 if we don't.
The answer is the minimum of fun (r, k, 0) and fun (r, k, 1): we want the answer for the full tree (v = r), with exactly k blue vertices picked (b = k), and we can either pick or not pick the root.
Now, how do we calculate this?
For the leaves, fun (v, 0, 0) is 0 and fun (v, t, 1) is 1, where t tells us whether vertex v is blue (1 if yes, 0 if no).
All other combinations are invalid, and we can simulate it by saying the respective values are positive infinities: for example, for a leaf vertex v, the value fun (v, 3, 1) = +infinity.
In the implementation, the infinity can be just any value greater than any possible answer.
For all internal vertices, let v be the current vertex and u and w be its children.
We have two options: to pick or not to pick the vertex v.
Suppose we pick it.
Then the value we get for fun (v, b, 1) is 1 (the picked vertex v) plus the minimum of fun (u, x, q) + fun (w, y, r) such that x + y is either b if the vertex v is black or b - 1 if it is blue, and q and r can be arbitrary: if we picked the vertex v, the edges v--u and v--w are already covered by our vertex cover.
Now let us not pick the vertex v.
Then the value we get for fun (v, b, 0) is just the minimum of fun (u, x, 1) + fun (w, y, 1) such that x + y = b: if we did not pick the vertex v, the edges v--u and v--w have to be covered by u and w.
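A sketch of this recurrence in Python, assuming the tree is given as a dict children (mapping an internal vertex to its pair of children; leaves are absent or mapped to None) and a dict blue of booleans. These names and the table layout are my own choices for illustration, not part of the original answer:

INF = float('inf')

def min_cover_with_k_blue(children, blue, root, k):
    # Returns the minimum size of a vertex cover containing exactly k blue
    # vertices, or INF if no such cover exists.
    def solve(v):
        # f[p][b] corresponds to fun(v, b, p): optimal size for the subtree
        # rooted at v, picking exactly b blue vertices, with p = 1 iff v is picked.
        f = [[INF] * (k + 1) for _ in range(2)]
        bv = 1 if blue[v] else 0
        if not children.get(v):                      # leaf
            f[0][0] = 0
            if bv <= k:
                f[1][bv] = 1
            return f
        u, w = children[v]
        fu, fw = solve(u), solve(w)
        # When v is picked, each child may itself be picked or not.
        best_u = [min(fu[0][x], fu[1][x]) for x in range(k + 1)]
        best_w = [min(fw[0][y], fw[1][y]) for y in range(k + 1)]
        for b in range(k + 1):
            # Don't pick v: both children must be picked to cover edges v-u and v-w.
            f[0][b] = min((fu[1][x] + fw[1][b - x] for x in range(b + 1)), default=INF)
            # Pick v: costs 1 and consumes bv of the blue budget.
            if b >= bv:
                f[1][b] = 1 + min((best_u[x] + best_w[b - bv - x]
                                   for x in range(b - bv + 1)), default=INF)
        return f

    f_root = solve(root)
    return min(f_root[0][k], f_root[1][k])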

Dijkstra’s algorithm to compute the shortest paths from a given source vertex s in O ((V+E) log W) time

G = (V, E) is a weighted, directed graph with a non-negative weight function w : E -> {0, 1, 2, ..., W}, where W is a non-negative integer. I want to modify Dijkstra's algorithm to compute the shortest paths from a given source vertex s in O((V + E) log W) time.
Let's take a standard Dijkstra's algorithm with a priority queue and change it a little bit:
We'll have a map from an int (a distance) to a vector to keep all vertices with the given distance.
We'll pop the element from the first vector to get the current vertex at each iteration and delete a key if the vector becomes empty.
We'll push an element to a corresponding vector when we add it to the queue.
There are at most W + 1 different keys in the map at any point. Why? Let v be the current vertex and d[v] be its distance. All keys less than d[v] were deleted. Any undiscovered vertex is not in the map. For any discovered vertex u, d[u] <= d[v] + max_edge_length = d[v] + W. Thus, any key in the map must be in the [d[v], d[v] + W] range.
The size of the map is O(W), so every operation on it works in O(log W) time (pushing/popping an element from a vector is O(1)).
Of course, it's useful only as long as W is relatively small (otherwise, you can go with a standard O((V + E) log V) implementation).
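Since Python has no built-in ordered map, a sketch of this idea can simulate the map with a dict of buckets keyed by tentative distance plus a small heap of the distinct keys currently present (at most W + 1 live keys at any time, by the argument above), so each key operation costs O(log W). The representation of adj and the function name are assumptions for illustration:

import heapq

def dijkstra_small_weights(adj, s):
    # adj: dict vertex -> list of (neighbor, weight), weights in {0, ..., W}.
    INF = float('inf')
    dist = {v: INF for v in adj}
    dist[s] = 0
    buckets = {0: [s]}     # the "map": tentative distance -> vertices at that distance
    keys = [0]             # heap of the distinct keys present in buckets
    while keys:
        d = keys[0]
        if not buckets[d]:             # key exhausted: drop it
            heapq.heappop(keys)
            del buckets[d]
            continue
        v = buckets[d].pop()
        if d > dist[v]:
            continue                   # stale entry, vertex already settled closer
        for u, w in adj[v]:
            nd = d + w
            if nd < dist[u]:
                dist[u] = nd
                if nd not in buckets:
                    buckets[nd] = []
                    heapq.heappush(keys, nd)
                buckets[nd].append(u)
    return dist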

Proof of correctness: Algorithm for diameter of a tree in graph theory

In order to find the diameter of a tree I can take any node from the tree, perform BFS to find a node which is farthest away from it and then perform BFS on that node. The greatest distance from the second BFS will yield the diameter.
I am not sure how to prove this, though. I have tried using induction on the number of nodes, but there are too many cases.
Any ideas would be much appreciated...
Let's call the endpoint found by the first BFS x. The crucial step is proving that the x found in this first step always "works" -- that is, that it is always at one end of some longest path. (Note that in general there can be more than one equally-longest path.) If we can establish this, it's straightforward to see that a BFS rooted at x will find some node as far as possible from x, which must therefore be an overall longest path.
Hint: Suppose (to the contrary) that there is a longer path between two vertices u and v, neither of which is x.
Observe that, on the unique path between u and v, there must be some highest (closest to the root) vertex h. There are two possibilities: either h is on the path from the root of the BFS to x, or it is not. Show a contradiction by showing that in both cases, the u-v path can be made at least as long by replacing some path segment in it with a path to x.
[EDIT] Actually, it may not be necessary to treat the 2 cases separately after all. But I often find it easier to break a configuration into several (or even many) cases, and treat each one separately. Here, the case where h is on the path from the BFS root to x is easier to handle, and gives a clue for the other case.
[EDIT 2] Coming back to this later, it now seems to me that the two cases that need to be considered are (i) the u-v path intersects the path from the root to x (at some vertex y, not necessarily at the u-v path's highest point h); and (ii) it doesn't. We still need h to prove each case.
I'm going to work out j_random_hacker's hint. Let s, t be a maximally distant pair. Let u be the arbitrary vertex. We have a schematic like
u
|
|
|
x
/ \
/ \
/ \
s t ,
where x is the junction of s, t, u (i.e. the unique vertex that lies on each of the three paths between these vertices).
Suppose that v is a vertex maximally distant from u. If the schematic now looks like
u
|
|
|
x v
/ \ /
/ *
/ \
s t ,
then
d(s, t) = d(s, x) + d(x, t) <= d(s, x) + d(x, v) = d(s, v),
where the inequality holds because d(u, t) = d(u, x) + d(x, t), d(u, v) = d(u, x) + d(x, v), and d(u, v) >= d(u, t) since v is maximally distant from u, so d(x, v) >= d(x, t). There is a symmetric case where v attaches between s and x instead of between x and t.
The other case looks like
u
|
*---v
|
x
/ \
/ \
/ \
s t .
Now,
d(u, s) <= d(u, v) <= d(u, x) + d(x, v)
d(u, t) <= d(u, v) <= d(u, x) + d(x, v)
d(s, t) = d(s, x) + d(x, t)
= d(u, s) + d(u, t) - 2 d(u, x)
<= 2 d(x, v)
2 d(s, t) <= d(s, t) + 2 d(x, v)
= d(s, x) + d(x, v) + d(v, x) + d(x, t)
= d(v, s) + d(v, t),
so max(d(v, s), d(v, t)) >= d(s, t) by an averaging argument, and v belongs to a maximally distant pair.
Here's an alternative way to look at it:
Suppose G = ( V, E ) is a nonempty, finite tree with vertex set V and edge set E.
Consider the following algorithm:
Let count = 0. Let all edges in E initially be uncolored. Let C initially be equal to V.
Consider the subset V' of V containing all vertices with exactly one uncolored edge:
if V' is empty then let d = count * 2, and stop.
if V' contains exactly two elements then color their mutual (uncolored) edge green, let d = count * 2 + 1, and stop.
otherwise, V' contains at least three vertices; proceed as follows:
Increment count by one.
Remove all vertices from C that have no uncolored edges.
For each vertex in V with two or more uncolored edges, re-color each of its green edges red (some vertices may have zero such edges).
For each vertex in V', color its uncolored edge green.
Return to step (2).
That basically colors the graph from the leaves inward, marking paths with maximal distance to a leaf in green and marking those with only shorter distances in red. Meanwhile, the nodes of C, the center, with shorter maximal distance to a leaf are pared away until C contains only the one or two nodes with the largest maximum distance to a leaf.
By construction, all simple paths from leaf vertices to their nearest center vertex that traverse only green edges are the same length (count), and all other simple paths from a leaf vertex to its nearest center vertex (traversing at least one red edge) are shorter. It can furthermore be proven that
this algorithm always terminates under the conditions given, leaving every edge of G colored either red or green, and leaving C with either one or two elements.
at algorithm termination, d is the diameter of G, measured in edges.
Given a vertex v in V, the maximum-length simple paths in G starting at v are exactly those that contain all vertices of the center, terminate at a leaf, and traverse only green edges between the center and the far endpoint. These go from v, across the center, to one of the leaves farthest from the center.
Now consider your algorithm, which might be more practical, in light of the above. Starting from any vertex v, there is exactly one simple path p from that vertex, ending at a center vertex, and containing all vertices of the center (because G is a tree, and if there are two vertices in C then they share an edge). It can be shown that the maximal simple paths in G having v as one endpoint all have the form of the union of p with a simple path from center to leaf traversing only green edges.
The key point for our purposes is that the incoming edge of the other endpoint is necessarily green. Therefore, when we perform a search for the longest paths starting there, we have access to those traversing only green edges from leaf across (all vertices of) the center to another leaf. Those are exactly the maximal-length simple paths in G, so we can be confident that the second search will indeed reveal the graph diameter.
procedure TreeDiameter(T)
    pick an arbitrary vertex v where v ∈ V
    u = BFS(T, v)
    t = BFS(T, u)
    return distance(u, t)
Result: Complexity = O(|V|)
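A straightforward Python version of this double-BFS procedure, assuming the tree is given as a dict adj of adjacency lists (unweighted, so distance is measured in edges; the function names are illustrative):

from collections import deque

def tree_diameter(adj):
    def bfs(src):
        # Returns the vertex farthest from src and its distance.
        dist = {src: 0}
        far = src
        q = deque([src])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    if dist[u] > dist[far]:
                        far = u
                    q.append(u)
        return far, dist[far]

    u, _ = bfs(next(iter(adj)))    # first BFS from an arbitrary vertex
    t, d = bfs(u)                  # second BFS from the farthest vertex found
    return d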

Counting the sum of every nodes' neighbors' degrees?

For each node u in an undirected graph, let twodegree[u] be the sum of the degrees of u's neighbors. Show how to compute the entire array of twodegree[.] values in linear time, given a graph in adjacency list format.
This is the solution
for all u ∈ V:
    degree[u] = 0
for all (u, w) ∈ E:
    degree[u] = degree[u] + 1
for all u ∈ V:
    twodegree[u] = 0
for all (u, w) ∈ E:
    twodegree[u] = twodegree[u] + degree[w]
Can someone explain what degree[u] does in this case, and how twodegree[u] = twodegree[u] + degree[w] ends up being the sum of the degrees of u's neighbors?
Here, degree[u] is the degree of the node u (that is, the number of nodes adjacent to it). You can see this computed by the first loop, which iterates over all edges in the graph and increments degree[u] for each edge in the graph.
The second loop then iterates over every node in the graph and computes the sum of all its neighbors' degrees. It uses the fact that degree[u] is precomputed in order to run in O(m + n) time.
Hope this helps!
In addition to what templatetypedef has said, the statement twodegree[u] = twodegree[u] + degree[w] simply keeps a running total of the twodegree of u by cumulatively adding the degrees of its neighbors w.
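A compact Python version of the two loops, assuming the graph is a dict adj of adjacency lists (for an undirected graph, each vertex's list contains all of its neighbors):

def twodegrees(adj):
    # First pass: degree[u] is simply the length of u's adjacency list.
    degree = {u: len(neighbors) for u, neighbors in adj.items()}
    # Second pass: for every edge (u, w), add degree[w] into twodegree[u].
    return {u: sum(degree[w] for w in neighbors) for u, neighbors in adj.items()}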
