I had an exam yesterday and I would like to check whether I answered one of the questions correctly.
The question:
G = (V, E, w) is a directed, simple graph (V: set of vertices, E: set of edges, w: non-negative weight function). There is a non-empty subset of E denoted E(red).
A path p in G will be called n-red if there are n red edges on p. d_red(u, v) will be the weight of the lightest path from vertex u to vertex v that is at least 1-red. If all paths from u to v are 0-red, d_red(u, v) = Infinity.
The weight of a path p is the sum of the weights of all edges on p.
Input:
G = (V, E, w)
s, t that are elements of V
f_red: E -> { true, false }
f_red(red edge) = true
f_red(non-red edge) = false
Output:
d_red(s, t) (the weight of the lightest path from s to t that includes at least one red edge).
Runtime Constraint: O(V log V + E)
In a few words, my solution was to run Dijkstra's algorithm while keeping a Boolean variable, initially false, that records whether at least one red edge has been encountered. On every iteration the current edge is checked with f_red, and the variable is set to true if f_red(current edge) = true. If the variable is still false at the end, I return d_red(s, t) = Infinity.
What do you think about that?
I need to find a 'dynamic programming' kind of solution for the following problem:
Input:
Perfect binary tree T = (V, E) (each node has exactly 2 children except the leaves).
V = V(blue) ∪ V(black).
V(blue) ∩ V(black) = ∅.
(In other words, some vertices in the tree are blue)
Root of the tree 'r'.
integer k
A legal Solution:
A subset of vertices V' ⊆ V which is a vertex cover of T, and |V' ∩ V(blue)| = k. (In other words, the cover V' contains k blue vertices)
Solution Value:
The value of a legal solution V' is the number of vertices in the set = |V'|.
For convenience, we will define the value of an illegal solution to be ∞.
What we need to find:
A solution with minimal Value.
(In other words, the best solution is a solution which is a cover, contains exactly k blue vertices, and has a minimal number of vertices.)
I need to define a typical sub-problem (that is, if I know the solution value of a subtree, I can use it to find the solution value of the whole problem)
and suggest a formula to solve it.
To me, it looks like you are on the right track!
Still, I think you will have to use an additional parameter to tell us how far any picked vertex is from the current subtree's root.
For example, it can be just the indication whether we pick the current vertex, as below.
Let fun (v, b, p) be the optimal size for the subtree with root v such that, in this subtree, we pick exactly b blue vertices, and p = 1 if we pick vertex v or p = 0 if we don't.
The answer is the minimum of fun (r, k, 0) and fun (r, k, 1): we want the answer for the full tree (v = r), with exactly k blue vertices picked (b = k), and we can either pick or not pick the root.
Now, how do we calculate this?
For the leaves, fun (v, 0, 0) is 0 and fun (v, t, 1) is 1, where t tells us whether vertex v is blue (1 if yes, 0 if no).
All other combinations are invalid, and we can model that by setting the respective values to positive infinity: for example, for a leaf vertex v, the value fun (v, 3, 1) = +infinity.
In the implementation, the infinity can be just any value greater than any possible answer.
For all internal vertices, let v be the current vertex and u and w be its children.
We have two options: to pick or not to pick the vertex v.
Suppose we pick it.
Then the value we get for fun (v, b, 1) is 1 (for the picked vertex v) plus the minimum of fun (u, x, q) + fun (w, y, r) such that x + y is either b if the vertex v is black or b - 1 if it is blue, and q and r can be arbitrary: since we picked the vertex v, the edges v--u and v--w are already covered by our vertex cover.
Now let us not pick the vertex v.
Then the value we get for fun (v, b, 0) is just the minimum of fun (u, x, 1) + fun (w, y, 1) such that x + y = b: since we did not pick the vertex v, the edges v--u and v--w have to be covered by u and w.
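If it helps, here is a minimal sketch of this recurrence in C++ (the representation is my own assumption: leftC[v] and rightC[v] are the children of v, -1 marks a leaf, isBlue[v] marks blue vertices, and INF plays the role of the infinity above):

#include <bits/stdc++.h>
using namespace std;

const long long INF = 1e18;                 // "invalid" value, larger than any real answer
int n, k;                                   // number of vertices, required number of blue vertices
vector<int> leftC, rightC, isBlue;          // children (-1 for leaves) and blue marks
// memo[v][b][p]: optimal size for the subtree of v with exactly b blue vertices
// picked, where p says whether v itself is picked (-1 = not computed yet).
vector<vector<array<long long, 2>>> memo;

long long fun(int v, int b, int p) {
    if (b < 0) return INF;
    long long& res = memo[v][b][p];
    if (res != -1) return res;
    if (leftC[v] == -1) {                   // leaf: the base cases described above
        if (p == 0) return res = (b == 0 ? 0 : INF);
        return res = (b == isBlue[v] ? 1 : INF);
    }
    res = INF;
    int need = b - (p == 1 ? isBlue[v] : 0);    // blue vertices left for the two children
    for (int x = 0; x <= need; ++x) {
        int y = need - x;
        if (p == 1) {
            // v is picked: the children may be picked or not.
            for (int q = 0; q <= 1; ++q)
                for (int r = 0; r <= 1; ++r) {
                    long long a = fun(leftC[v], x, q), c = fun(rightC[v], y, r);
                    if (a < INF && c < INF) res = min(res, 1 + a + c);
                }
        } else {
            // v is not picked: both children must be picked to cover v--u and v--w.
            long long a = fun(leftC[v], x, 1), c = fun(rightC[v], y, 1);
            if (a < INF && c < INF) res = min(res, a + c);
        }
    }
    return res;
}

int main() {
    // Tiny example: root 0 with leaf children 1 and 2; vertex 1 is blue; k = 1.
    n = 3; k = 1;
    leftC  = {1, -1, -1};
    rightC = {2, -1, -1};
    isBlue = {0, 1, 0};
    memo.assign(n, vector<array<long long, 2>>(k + 1, {-1, -1}));
    long long ans = min(fun(0, k, 0), fun(0, k, 1));
    cout << (ans >= INF ? -1 : ans) << "\n";    // prints 2, e.g. pick vertices 0 and 1
    return 0;
}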
There is a directed graph G = (V, E) with edge weights w(u, v) for (u, v) ∈ E.
Suppose someone gives us values {d[v], π[v]} for every v ∈ V and claims that these are the length of the shortest path and the predecessor node on it for each v ∈ V. How could I verify whether this claim is true or false without solving the entire shortest path problem from scratch? This is a problem I ran into and I don't have many ideas.
The problem is a bit unclear, but to clarify:
There's a node s in your graph, and for each vertex v:
for v != s, pi[v] is intended to be a node adjacent to v that's on a shortest path from v to s;
d[v] is intended to store the shortest distance from v to s.
The problem is to verify, given pi and d, that they legitimately contain the back-edges and the minimal distances.
An easily implemented set of conditions that verifies this is as follows:
For each vertex v
Either:
v = s and d[v] = 0
Or all of the following:
pi[v] is adjacent to v, i.e. (v, pi[v]) is an edge of G
d[v] = d[pi[v]] + w(v, pi[v])
d[v] <= d[u] + w(v, u) for every edge (v, u) in E
(With strictly positive weights this is enough, since d strictly decreases along pi; if zero-weight edges are allowed, additionally check that following pi from every vertex eventually reaches s.)
This check runs in O(V + E) time.
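A small sketch of this check in C++, assuming strictly positive weights (so following the tight pi edges cannot cycle) and an adjacency-list representation of my own choosing (adj[v] holds pairs (u, w) for every edge (v, u)):

#include <bits/stdc++.h>
using namespace std;

// Verifies claimed shortest-path data towards s in O(V + E).
// adj[v] holds pairs (u, w) for every edge (v, u) of weight w > 0,
// d[v] is the claimed distance from v to s, pi[v] the claimed next hop.
bool verify(int n, int s,
            const vector<vector<pair<int, long long>>>& adj,
            const vector<long long>& d, const vector<int>& pi) {
    for (int v = 0; v < n; ++v) {
        if (v == s) {
            if (d[v] != 0) return false;
            continue;
        }
        bool piEdgeTight = false;
        for (auto [u, w] : adj[v]) {
            // No outgoing edge may offer something shorter than the claim.
            if (d[v] > d[u] + w) return false;
            // The claimed predecessor edge must exist and be tight.
            if (u == pi[v] && d[v] == d[u] + w) piEdgeTight = true;
        }
        if (!piEdgeTight) return false;   // pi[v] not adjacent to v, or not tight
    }
    return true;
}

int main() {
    // Tiny example: edges 1 -> 0 (weight 3) and 2 -> 1 (weight 2), s = 0.
    vector<vector<pair<int, long long>>> adj(3);
    adj[1] = {{0, 3}};
    adj[2] = {{1, 2}};
    vector<long long> d = {0, 3, 5};
    vector<int> pi = {-1, 0, 1};        // pi[s] is unused
    cout << (verify(3, 0, adj, d, pi) ? "valid" : "invalid") << "\n";   // prints: valid
    return 0;
}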
I am given a directed graph G = (V, E), and all of its edges have a weight of either 0 or 1.
I'm given a vertex named "A" in the graph, and for each v in V, I need to find the weight of the lowest-weight path from A to v in time O(V+E).
I have to use only BFS or DFS (although this is probably a BFS problem).
I thought about making a new graph where vertices that have a 0-weight edge between them are united and then running BFS on it, but that would ruin the graph's direction (this would work if the graph were undirected, or if the weights were {2, 1}, in which case I would create a new vertex for every edge of weight 2).
I would appreciate any help.
Thanks
I think it can be done with a combination of DFS and BFS.
In the original BFS for an unweighted graph, we have the invariant that unexplored nodes have a distance greater than or equal to that of explored nodes.
In our BFS, whenever a node is discovered we first do a DFS through all 0-weighted edges reachable from it, record the same distance for them, and mark them as explored. Then we continue with the other nodes in our BFS.
Array Seen[] = false
Empty queue Q
E' = {(a, b) | w(a, b) = 0 and (a, b) ∈ E}        // the 0-weight edges

DFS(V, E', u)                                     // explore everything reachable from u via 0-edges
    Seen[u] = true                                // mark before recursing, so 0-weight cycles terminate
    Enqueue(Q, u)
    for each v adjacent to u in E'                // (u, v) is an edge of weight 0
        if Seen[v] = false
            v.dist = u.dist
            DFS(V, E', v)

BFS(V, E, source)
    source.dist = 0
    DFS(V, E', source)                            // source and everything 0-reachable from it get distance 0
    while Q is not empty
        u = Dequeue(Q)
        for each v adjacent to u in E
            if Seen[v] = false                    // only weight-1 edges can lead to unseen nodes here,
                v.dist = u.dist + 1               // because DFS(u) already visited u's 0-neighbours
                DFS(V, E', v)                     // v and everything 0-reachable from it get distance u.dist + 1
After running the BFS, you have the shortest distance from the source node to every node. If you only want the shortest distance to a single node, simply terminate when you see the destination node. And yes, it meets the O(V+E) time complexity requirement.
This problem can be reduced to the Single Source Shortest Path problem.
You just need to reverse all the edge directions and find the minimum distance of each vertex v from the vertex A.
It is easy to observe that if in the initial graph we had a minimal path from some vertex v to A, then after reversing the edge directions we have the same minimal path from A to v.
This can be done either with Dijkstra, or, since the edges only have the two values {0, 1}, with a modified BFS (first visit the vertices at distance 0, then 1, then 2, and so on; see the sketch below).
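For reference, here is a minimal sketch of that modified BFS, the usual deque-based 0-1 BFS (the representation is my own: adj[u] holds pairs (v, w) with w in {0, 1}):

#include <bits/stdc++.h>
using namespace std;

// 0-1 BFS: shortest distances from src when every edge weight is 0 or 1.
// Each vertex and edge is handled a constant number of times, so it runs in O(V + E).
vector<int> zeroOneBfs(int n, int src, const vector<vector<pair<int, int>>>& adj) {
    vector<int> dist(n, INT_MAX);
    deque<int> dq;
    dist[src] = 0;
    dq.push_back(src);
    while (!dq.empty()) {
        int u = dq.front();
        dq.pop_front();
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                // Weight-0 edges stay in the current distance layer (front),
                // weight-1 edges go to the next layer (back).
                if (w == 0) dq.push_front(v);
                else dq.push_back(v);
            }
        }
    }
    return dist;
}

int main() {
    // Small example: 0 -> 1 (weight 1), 0 -> 2 (weight 0), 2 -> 1 (weight 0).
    vector<vector<pair<int, int>>> adj(3);
    adj[0] = {{1, 1}, {2, 0}};
    adj[2] = {{1, 0}};
    for (int d : zeroOneBfs(3, 0, adj)) cout << d << " ";   // prints: 0 0 0
    return 0;
}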
I am having a tough time understanding Tarjan's lowest common ancestor algorithm. Can somebody explain it with an example?
I am stuck after the DFS search; what exactly does the algorithm do?
My explanation will be based on the Wikipedia link posted above :).
I assume that you already know about the disjoint-set (union-find) structure used in the algorithm.
(If not, please read about it; you can find it in "Introduction to Algorithms".)
The basic idea is that every time the algorithm visits a node x, the marked ancestor of all of its already-visited descendants becomes that node x.
So to find the least common ancestor (LCA) r of two nodes (u, v), there are two cases:
Node u is a descendant of node v (or vice versa); this case is obvious.
Node u is in the i-th branch and v is in the j-th branch (i < j) of node r. After visiting node u, the algorithm backtracks to node r, which is the ancestor of the two nodes, marks the ancestor of node u's set as r, and goes on to visit node v.
At the moment it visits node v, since u is already marked as visited (black), the answer will be r. Hope you get it!
I will explain using the code from CP-Algorithms:
void dfs(int v)
{
    visited[v] = true;
    ancestor[v] = v;                       // v currently represents its own set
    for (int u : adj[v]) {
        if (!visited[u]) {
            dfs(u);                        // finish the whole subtree of u first
            union_sets(v, u);              // merge u's subtree into v's set
            ancestor[find_set(v)] = v;     // the merged set is now "owned" by v
        }
    }
    for (int other_node : queries[v]) {
        if (visited[other_node])
            cout << "LCA of " << v << " and " << other_node
                 << " is " << ancestor[find_set(other_node)] << ".\n";
    }
}
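For completeness, here is a minimal bit of surrounding context that the snippet assumes: a plain union-find plus the arrays it uses, together with a small hand-made example (the tree, the queries and all sizes are my own assumptions; paste the dfs function above where indicated to get a complete program):

#include <bits/stdc++.h>
using namespace std;

const int MAXN = 100005;
vector<int> adj[MAXN], queries[MAXN];   // tree adjacency lists and, per vertex, its query partners
int parent[MAXN], ancestor[MAXN];
bool visited[MAXN];

// Plain union-find (path compression only, for brevity).
int find_set(int v) { return parent[v] == v ? v : parent[v] = find_set(parent[v]); }
void union_sets(int a, int b) { parent[find_set(b)] = find_set(a); }

void dfs(int v);   // <- the dfs function from the snippet above goes here

int main() {
    // Example tree:      0
    //                   / \
    //                  1   2
    //                 / \
    //                3   4
    int n = 5;
    int edges[][2] = {{0, 1}, {0, 2}, {1, 3}, {1, 4}};
    for (auto& e : edges) { adj[e[0]].push_back(e[1]); adj[e[1]].push_back(e[0]); }
    for (int i = 0; i < n; ++i) parent[i] = i;
    // Each query is registered at both endpoints, so whichever endpoint is
    // visited second reports the answer.
    queries[3].push_back(4); queries[4].push_back(3);
    queries[4].push_back(2); queries[2].push_back(4);
    dfs(0);   // prints "LCA of 4 and 3 is 1." and "LCA of 2 and 4 is 0."
    return 0;
}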
Let's outline a proof of the algorithm.
Lemma 1: For each vertex v and its parent p, after we finish visiting v from p and union v with p, p and all vertices in the subtree rooted at v (i.e. p and all descendants of v, including v) will be in one disjoint set represented by p (i.e. ancestor[representative of the disjoint set] is p).
Proof: Suppose the tree has height h. Proceed by induction on vertex height, starting from the leaf nodes.
Lemma 2: For each vertex v, right before we mark it as visited, the following statements are true:
Each of v's ancestors p_i will be in a disjoint set, represented by p_i (its marked ancestor is p_i), that contains precisely p_i and all vertices in the subtrees of p_i that p_i has already finished visiting.
Every visited vertex so far is in one of these disjoint sets.
Proof: We proceed by induction. The statement is vacuously true for the root (the only vertex at depth 0), as it has no ancestors. Now suppose the statement holds for every vertex at depth k for some k ≥ 0, and suppose v is a vertex at depth k + 1. Let p be v's parent. Before p visits v, suppose it has already visited its children c1, c2, ..., cn. By Lemma 1, p and all vertices in the subtrees rooted at c1, c2, ..., cn are in one disjoint set represented by p. Furthermore, the vertices newly visited after we visited p are exactly the vertices in this disjoint set (other than p itself). Since p is at depth k, we can use the induction hypothesis to conclude that v indeed satisfies 1 and 2.
We are now ready to prove the algorithm.
Claim: For each query (u, v), the algorithm outputs the lowest common ancestor of u and v.
Proof: Without loss of generality suppose we visit u before we visit v in the DFS. Then either v is a descendant of u or not.
If v is a descendant of u, then u is an ancestor of v, so by Lemma 2 u's disjoint set is represented by u at the time we mark v; this does not change while we visit v's subtree, since u is still on the recursion stack. Hence when the query is processed at v, ancestor[find_set(u)] is u, the correct answer.
If v is not a descendant of u, then by Lemma 2 we know that u must be in one of the disjoint sets, each of them represented by an ancestor of v at the time we mark v. Let p be the representing vertex of the disjoint set u is in. By Lemma 2 we know p is an ancestor of v, and u is in an already-visited subtree of p and therefore a descendant of p. Neither fact changes while we visit v's children, so p is indeed a common ancestor of u and v. To see that p is the lowest common ancestor, let q be the child of p of which v is a descendant (i.e. if we travel back to the root from v, q is the last vertex before we reach p; q can be v itself). Suppose for contradiction that u is also a descendant of q. Since v is currently being visited, q's subtree is not yet finished, so it is not one of the subtrees that p has finished visiting; hence no descendant of q can be in the disjoint set represented by p. This contradicts the fact that u, a descendant of q, is in that set.
Let G = (V, E) be a weighted, connected and undirected graph. Let T1 and T2 be two different MSTs. Suppose we can write E = (A1 ∪ B ∪ A2) such that:
B is the intersection of the edges of T1 and T2, and
A1 = T1 - B
A2 = T2 - B
Assuming that every MST T in G contains all the edges of B, find an algorithm that decides whether there is an MST T that contains at least one edge in A1 and at least one edge in A2.
Edit: I've dropped the part that was here. I think that it does more harm than good.
You should order your edges so that a red edge is preferred to a blue edge when choosing; then you can use any MST algorithm, such as Prim's algorithm:
If a graph is empty then we are done immediately. Thus, we assume otherwise. The algorithm starts with a tree consisting of a single vertex, and continuously increases its size one edge at a time, until it spans all vertices.
Input: A non-empty connected weighted graph with vertices V and edges E (the weights can be negative).
Initialize: Vnew = {x}, where x is an arbitrary node (starting point) from V, and Enew = {}.
Repeat until Vnew = V: choose an edge {u, v} with minimal weight such that u is in Vnew and v is not (if there are multiple edges with the same weight, any of them may be picked); add v to Vnew and {u, v} to Enew.
Output: Vnew and Enew describe a minimal spanning tree.
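A compact C++ sketch of this, a priority-queue Prim where the comparison also encodes the "prefer one class of edges on ties" idea from the answer above (the Edge struct and its preferred flag are my own assumptions):

#include <bits/stdc++.h>
using namespace std;

struct Edge { int to; long long w; bool preferred; };   // preferred: wins ties against non-preferred edges

// Prim's algorithm with lazy deletion; among edges of equal weight crossing the cut,
// preferred edges are chosen first. adj[u] lists the edges incident to u (each
// undirected edge appears in both endpoint lists).
long long prim(int n, const vector<vector<Edge>>& adj, vector<pair<int, int>>& treeEdges) {
    // Heap entries: (weight, notPreferred, to, from); the smallest tuple is popped first.
    using Item = tuple<long long, int, int, int>;
    priority_queue<Item, vector<Item>, greater<Item>> pq;
    vector<bool> inTree(n, false);
    long long total = 0;
    pq.push({0, 0, 0, -1});                       // start from vertex 0
    while (!pq.empty()) {
        auto [w, np, v, from] = pq.top(); pq.pop();
        if (inTree[v]) continue;                  // stale entry
        inTree[v] = true;
        total += w;
        if (from != -1) treeEdges.push_back({from, v});
        for (const Edge& e : adj[v])
            if (!inTree[e.to])
                pq.push({e.w, e.preferred ? 0 : 1, e.to, v});
    }
    return total;                                 // weight of the spanning tree
}

int main() {
    // Triangle 0-1-2, all weights 1; the edge 0-2 is "preferred" and wins its tie.
    vector<vector<Edge>> adj(3);
    auto addEdge = [&](int a, int b, long long w, bool pref) {
        adj[a].push_back({b, w, pref});
        adj[b].push_back({a, w, pref});
    };
    addEdge(0, 1, 1, false);
    addEdge(1, 2, 1, false);
    addEdge(0, 2, 1, true);
    vector<pair<int, int>> treeEdges;
    cout << prim(3, adj, treeEdges) << "\n";                       // prints: 2
    for (auto [a, b] : treeEdges) cout << a << "-" << b << " ";    // includes the preferred edge 0-2
    return 0;
}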