I have the following problem and I'm not quite sure how to solve it:
Given a graph G = (V, E) in which every edge e has a positive integer cost c_e, and a starting vertex s ∈ V, design an O(V + E) algorithm that marks all vertices reachable from s by a path (not necessarily a simple path) whose total cost is a multiple of 5.
How can I keep track of the total cost of the path traversed so far? I've been studying BFS on undirected weighted graphs and have made some attempts at using it here, but most BFS references focus on finding the shortest path (not on keeping the cost a multiple of 5).
What do you think about the following algorithm?
Let's build a new directed graph from the source graph. For every vertex v of the source graph, create 5 new vertices v[0], v[1], ..., v[4] in the new graph, corresponding to the residues modulo 5. Then, if vertices v and u are connected in the source graph by an edge of weight w, add the edges v[i] -> u[(i + w) % 5] and u[j] -> v[(j + w) % 5] to the new graph, for i = 0..4 and j = 0..4. Finally, run BFS from s[0], where s is the starting vertex of the source graph.
Now consider the vertices with index 0, i.e. the vertices v[0]. Each one reached by the BFS corresponds to a path from s to v in the source graph whose total cost is a multiple of 5. All such vertices marked reachable by the BFS form the answer. The total complexity is linear, since the new graph has 5V vertices and 10E edges.
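To make the construction concrete, here is a minimal Python sketch of the same idea. Rather than materializing the 5 copies of every vertex, it runs BFS over (vertex, cost mod 5) states, which is equivalent; the adjacency-list format (a dict mapping each vertex to a list of (neighbour, cost) pairs, with every undirected edge listed in both directions) is an assumption.

from collections import deque

def reachable_mod5(adj, s):
    """adj: dict mapping vertex -> list of (neighbour, cost) pairs, with each
    undirected edge listed in both directions. Returns the set of vertices v
    such that some path from s to v has total cost divisible by 5."""
    visited = {(s, 0)}                 # state = (vertex, total cost mod 5)
    queue = deque([(s, 0)])
    while queue:
        v, r = queue.popleft()
        for u, w in adj.get(v, []):
            state = (u, (r + w) % 5)
            if state not in visited:
                visited.add(state)
                queue.append(state)
    # a vertex is in the answer iff its residue-0 copy was reached
    return {v for v, r in visited if r == 0}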
Let G = (V, E) be a directed weighted graph in which all edge lengths are positive, except for two edges that have negative lengths. Given a fixed vertex s, give an algorithm that computes shortest paths from s to every other vertex in O(E + V log V) time.
My work:
I am thinking about using the reweighting technique of Johnson's algorithm: run Bellman-Ford once and then apply Dijkstra V times. This gives a time complexity of O(V^2 log V + VE).
But that is the standard all-pairs shortest paths approach. Since I only need one source vertex (s), will my time complexity be O(V log V + E)?
For this kind of problem, changing the graph is often a lot easier than changing the algorithm. Let's call the two negative-weight edges N1 and N2; a path by definition cannot use the same edge more than once, so there are four kinds of path:
A. Those which use neither N1 nor N2,
B. Those which use N1 but not N2,
C. Those which use N2 but not N1,
D. Those which use both N1 and N2.
So we can construct a new graph with four copies of each node from the original graph, such that for each node u in the original graph, (u, A), (u, B), (u, C) and (u, D) are nodes in the new graph. The edges in the new graph are as follows:
For each positive weight edge u-v in the original graph, there are four copies of this edge in the new graph, (u, A)-(v, A) ... (u, D)-(v, D). Each edge in the new graph has the same weight as the corresponding edge in the original graph.
For the first negative-weight edge (N1), there are two copies of this edge in the new graph; one from layer A to layer B, and one from layer C to layer D. These new edges have weight 0.
For the second negative-weight edge (N2), there are two copies of this edge in the new graph; one from layer A to layer C, and one from layer B to layer D. These new edges have weight 0.
Now we can run any standard single-source shortest-path algorithm, e.g. Dijkstra's algorithm, just once on the new graph. The shortest path from the source to a node u in the original graph will be one of the following four paths in the new graph, whichever corresponds to a path of the lowest weight in the original graph:
(source, A) to (u, A) with the same weight.
(source, A) to (u, B) with the weight in the new graph plus the (negative) weight of N1.
(source, A) to (u, C) with the weight in the new graph plus the (negative) weight of N2.
(source, A) to (u, D) with the weight in the new graph plus the (negative) weights of both N1 and N2.
Since the new graph has 4V vertices and 4E - 4 edges, the worst-case performance of Dijkstra's algorithm is O((4E - 4) + 4V log 4V), which simplifies to O(E + V log V) as required.
To ensure that a shortest path in the new graph corresponds to a genuine path in the original graph, it remains to be proved that a path from e.g. (source, A) to (u, B) will not use two copies of the same edge from the original graph. That is quite easy to show, but I'll leave it to you as something to think about.
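Here is a rough Python sketch of this layered construction, under some assumptions not in the original: vertices are numbered 0..n-1, the positive edges are given as (u, v, w) triples, and the two negative edges are passed separately. It uses a binary-heap Dijkstra, which gives O(E log V) rather than the strict O(E + V log V) of a Fibonacci heap, but the layering idea is the same.

import heapq

def shortest_paths_two_negative_edges(n, pos_edges, neg1, neg2, source):
    """pos_edges: directed edges (u, v, w) with w > 0, excluding the two negative
    edges neg1 = (u1, v1, w1) and neg2 = (u2, v2, w2), where w1, w2 < 0.
    Vertices are 0..n-1. Returns dist[v] = length of a shortest path source -> v."""
    A, B, C, D = 0, 1, 2, 3                        # the four layers
    adj = [[] for _ in range(4 * n)]               # node (v, layer) -> index layer*n + v

    def add(u, lu, v, lv, w):
        adj[lu * n + u].append((lv * n + v, w))

    for u, v, w in pos_edges:                      # positive edges: one copy per layer
        for layer in (A, B, C, D):
            add(u, layer, v, layer, w)
    u1, v1, w1 = neg1                              # N1 crosses A -> B and C -> D, weight 0
    add(u1, A, v1, B, 0); add(u1, C, v1, D, 0)
    u2, v2, w2 = neg2                              # N2 crosses A -> C and B -> D, weight 0
    add(u2, A, v2, C, 0); add(u2, B, v2, D, 0)

    INF = float('inf')
    dist = [INF] * (4 * n)
    dist[A * n + source] = 0
    heap = [(0, A * n + source)]
    while heap:                                    # standard Dijkstra on the new graph
        d, x = heapq.heappop(heap)
        if d > dist[x]:
            continue
        for y, w in adj[x]:
            if d + w < dist[y]:
                dist[y] = d + w
                heapq.heappush(heap, (d + w, y))

    # add back the (negative) weights of the negative edges each layer is forced to use
    return [min(dist[A * n + v],
                dist[B * n + v] + w1,
                dist[C * n + v] + w2,
                dist[D * n + v] + w1 + w2) for v in range(n)]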
I am working on an algorithm to compute G^2 of a directed graph given as an adjacency list, where G^2 = (V, E') and (u, v) ∈ E' if there is a path of length 2 between u and v in G. I understand the question well and have found an algorithm which I believe is correct; however, its runtime is O(VE^2), where V is the number of vertices and E is the number of edges of the graph. I was wondering how I could do this in O(VE) time to make it more efficient.
Here is the algorithm I came up with:
for vertex in Vertices
    for neighbor in Neighbors
        for n in Neighbors
            if (n != neighbor)
                then -> if (n.value == neighbor)
                    add this to a new adjacency list
                    break  // this means we have found a path of size 2 between vertex and neighbor
            continue otherwise
The problem can be solved in O(VE) time using BFS (breadth-first search). The key property of BFS is that it traverses the graph level by level: first it visits all the vertices at distance 1 from the source vertex, then all the vertices at distance 2, and so on. We can take advantage of this fact and terminate our BFS once we have reached the vertices at distance 2.
Following is the pseudocode:
For each vertex v in V
{
    Do a BFS with v as the source vertex
    {
        For all vertices u at distance 2 from v
            add u to the adjacency list of v
        and terminate the BFS
    }
}
Since BFS takes O(V + E) time and we invoke it once for every vertex, the total time is O(V(V + E)) = O(V^2 + VE) = O(VE), assuming E ≥ V. Just remember to start with fresh data structures for every BFS traversal.
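As a minimal Python sketch of the truncated BFS described above (assuming the graph is a dict of out-neighbour lists, which is not specified in the original); note that, as written, it records for each source exactly the vertices whose BFS distance from it is 2.

from collections import deque

def graph_square(adj):
    """adj: dict mapping each vertex to a list of its out-neighbours.
    For every source vertex, run a BFS truncated at level 2 and collect the
    vertices whose BFS distance from the source is exactly 2; those become the
    out-neighbours of the source in G^2."""
    square = {u: [] for u in adj}
    for src in adj:
        dist = {src: 0}                   # fresh data structures for every traversal
        queue = deque([src])
        while queue:
            u = queue.popleft()
            if dist[u] == 2:              # terminate: deeper levels are irrelevant
                continue
            for v in adj.get(u, []):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
                    if dist[v] == 2:
                        square[src].append(v)
    return square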
I am studying algorithms, and I have come across an exercise like this: given a graph G, a minimum spanning tree T of G, and an edge (u, v) of T that is removed from the graph, find a minimum spanning tree of G - (u, v) in O(V + E) time.
I can solve this problem in exponential time, but I don't know how to do it in linear time, O(E + V).
I would appreciate any help.
Let G be the graph where the minimum spanning tree T is embedded; let A and B be the two trees remaining after (u,v) is removed from T.
Premise P: Select minimum weight edge (x,y) from G - (u,v) that reconnects A and B. Then T' = A + B + (x,y) is a MST of G - (u,v).
Proof of P: It's obvious that T' is a tree. Suppose it were not minimum. Then there would be a MST - call it M - of smaller weight. And either M contains (x,y), or it doesn't.
If M contains (x,y), then it must have the form A' + B' + (x,y) where A' and B' are minimum weight trees that span the same vertices as A and B. These can't have weight smaller than A and B, otherwise T would not have been an MST. So M is not smaller than T' after all, a contradiction; M can't exist.
If M does not contain (x,y), then there is some other path P from x to y in M. One or more edges of P pass from a vertex in A to another in B. Call such an edge c. Now, c has weight at least that of (x,y), else we would have picked it instead of (x,y) to form T'. Note P+(x,y) is a cycle. Consequently, M - c + (x,y) is also a spanning tree. If c were of greater weight than (x,y) then this new tree would have smaller weight than M. This contradicts the assumption that M is a MST. Again M can't exist.
Since in either case, M can't exist, T' must be a MST. QED
Algorithm
Traverse A and color all its vertices Red. Similarly label B's vertices Blue. Now traverse the edge list of G - (u,v) to find a minimum weight edge connecting a Red vertex with a Blue. The new MST is this edge plus A and B.
When you remove one of the edges, the MST breaks into two parts; let's call them a and b. What you can do is iterate over all vertices of part a and look at all their adjacent edges; among the edges that link part a to part b, the one of minimum weight completes the new MST.
Pseudocode:
minEdge = null
for (all vertices u in part a) {
    for (all adjacent edges (u, v) of u) {
        if (u and v belong to different parts of the MST)
            if (minEdge == null or weight(u, v) < weight(minEdge))
                minEdge = (u, v);  // cheapest edge reconnecting the two parts
    }
}
// new MST = part a + part b + minEdge
Complexity is O(V + E)
Note: you can keep a simple array to check whether a vertex is in part a or part b of the MST.
Also note that in order to get the O(V + E) complexity, you need to have an adjacency list representation of the graph.
Let's say you have the graph G' after removing the edge. G' consists of two connected components.
Let each node in the graph have a componentID. Set the componentID for all the nodes based on which component they belong to. This can be done with a simple BFS for example on G'. This is an O(V) operation as G' only has V nodes and V-2 edges.
Once all the nodes have been flagged, iterate over all unused edges and find the one with the least weight that connects the two components (componentIDs of the two nodes will be different). This is an O(E) operation.
Thus the total runtime is O(V+E).
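Putting the answers above together, here is a rough Python sketch, assuming vertices are numbered 0..n-1, edges are (u, v, w) triples, and the removed edge appears with the same orientation in both edge lists (all of these are assumptions, not part of the original exercise).

from collections import deque

def repair_mst(n, graph_edges, tree_edges, removed):
    """graph_edges: all edges (u, v, w) of G; tree_edges: edges of the MST T,
    including `removed`; removed: the tree edge being deleted.
    Returns the edge list of an MST of G - removed."""
    # adjacency list of T without the removed edge
    tree_adj = [[] for _ in range(n)]
    for u, v, w in tree_edges:
        if (u, v, w) != removed:
            tree_adj[u].append(v)
            tree_adj[v].append(u)

    # BFS from one endpoint of the removed edge labels component A (the "Red" side);
    # every vertex it does not reach is in component B (the "Blue" side)
    in_a = [False] * n
    in_a[removed[0]] = True
    queue = deque([removed[0]])
    while queue:
        u = queue.popleft()
        for v in tree_adj[u]:
            if not in_a[v]:
                in_a[v] = True
                queue.append(v)

    # scan all remaining edges of G for the cheapest one reconnecting A and B
    best = None
    for u, v, w in graph_edges:
        if (u, v, w) != removed and in_a[u] != in_a[v]:
            if best is None or w < best[2]:
                best = (u, v, w)

    new_tree = [e for e in tree_edges if e != removed]
    if best is not None:                 # otherwise the graph has become disconnected
        new_tree.append(best)
    return new_tree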
I'd love some guidance on this problem:
G is a directed acyclic graph. You want to move from vertex c to vertex z. Some edges reduce your profit and some increase your profit. How do you get from c to z while maximizing your profit? What is the time complexity?
Thanks!
The problem has an optimal substructure. To find the longest path from vertex c to vertex z, we first need to find the longest path from c to all the predecessors of z. Each of these is a smaller subproblem (the longest path from c to a specific predecessor).
Let's denote the predecessors of z as u_1, u_2, ..., u_k and let dist[z] be the length of the longest path from c to z; then dist[z] = max_i(dist[u_i] + w(u_i, z)).
(Illustration with 3 predecessors, edge weights omitted.)
So to find the longest path to z, we first need to find the longest path to each of its predecessors and take the maximum over their values plus their edge weights to z.
This requires that, whenever we visit a vertex u, all of u's predecessors have already been analyzed and computed.
So the question is: for any vertex u, how do we make sure that once we set dist[u], dist[u] will never be changed later on? Put another way: how do we make sure that we have considered all paths from c to u before considering any edge originating at u?
Since the graph is acyclic, we can guarantee this condition by finding a topological sort of the graph. A topological sort is like a chain of vertices where all edges point from left to right. So if we are at vertex v_i, then we have already considered all paths leading to v_i and have the final value of dist[v_i].
The time complexity: the topological sort takes O(V + E). In the worst case, where z is a sink and all other vertices point to it, we will visit all the graph's edges, which gives O(V + E).
Let f(u) be the maximum profit you can get going from c to u in your DAG. Then you want to compute f(z). This can be easily computed in linear time using dynamic programming/topological sorting.
Initialize f(u) = -infinity for every u other than c, and f(c) = 0. Then compute the values of f in some topological order of your DAG. Since the order is topological, for every incoming edge of the node being computed, the other endpoint has already been computed, so just pick the maximum possible value for this node, i.e. f(u) = max(f(v) + cost(v, u)) over each incoming edge (v, u).
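A minimal Python sketch of this DP, using Kahn's algorithm to obtain the topological order; the (u, v, profit) edge-list format and 0-based vertex numbering are assumptions made for the example.

from collections import deque

def max_profit(n, edges, c, z):
    """n vertices 0..n-1; edges: list of (u, v, profit) of a DAG, where profit may
    be negative. Returns the maximum total profit of a path from c to z, or
    -infinity if z is unreachable from c."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1

    # Kahn's algorithm for a topological order
    order = []
    queue = deque(u for u in range(n) if indeg[u] == 0)
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)

    NEG_INF = float('-inf')
    f = [NEG_INF] * n
    f[c] = 0
    for u in order:                       # relax outgoing edges in topological order
        if f[u] == NEG_INF:               # u is not reachable from c
            continue
        for v, w in adj[u]:
            f[v] = max(f[v], f[u] + w)
    return f[z]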
It's better to use topological sorting instead of Bellman-Ford here, since the graph is a DAG.
Source:- http://www.utdallas.edu/~sizheng/CS4349.d/l-notes.d/L17.pdf
EDIT:-
G is a DAG with negative edges.
Some edges reduce your profit and some increase your profit:
Edges that increase profit have a positive value.
Edges that decrease profit have a negative value.
After the topological sort (TS), for each vertex u in TS order, relax each of its outgoing edges.
dist[] = {-INF, -INF, ...}
dist[c] = 0                                     // source
for every vertex u in topological order
    if (u == z) break;                          // dest vertex reached
    for every adjacent vertex v of u
        if (dist[v] < (dist[u] + weight(u, v))) // '<' for longest path = max profit
            dist[v] = dist[u] + weight(u, v)
ans = dist[z];
We are given a directed graph G (possibly with cycles) with positive edge weights, and the minimum distance D[v] from a source s to every vertex v is also given (D is given as an array).
The problem is to find the array N[v] = number of paths of length D[v] from s to v,
in linear time.
Now this is a homework problem that I've been struggling with for quite a while. I was working along the following line of thought: remove the cycles by suitably choosing an acyclic subgraph of G, and then find shortest paths from s to v in that subgraph.
But I cannot figure out explicitly what to do, so I'd appreciate any help, as in a qualitative idea on what to do.
You can use a dynamic programming approach here, filling in the number of paths as you go whenever D[u] + w(u,v) = D[v], something like:
N = [0, ..., 0]
N[s] = 1                                // the empty path
for each vertex v, in *ascending* order of `D[v]`:
    for each edge (u, v) such that D[u] < D[v]:
        if D[u] + w(u, v) = D[v]:       // just found new shortest paths, using (u, v)!
            N[v] += N[u]
The complexity is O(V log V + E): the O(V log V) term comes from sorting the vertices by D[v], and assuming the graph is not sparse, the O(E) term dominates.
Explanation:
If there is a shortest path v_0 -> v_1 -> ... -> v_(k-1) -> v_k from v_0 to v_k, then v_0 -> ... -> v_(k-1) is a shortest path from v_0 to v_(k-1); thus, when iterating over v_k, N[v_(k-1)] has already been fully computed (remember, all edges have positive weights, so D[v_(k-1)] < D[v_k], and we iterate by increasing value of D[v]).
Therefore, the path v_0 -> ... -> v_(k-1) is already counted in N[v_(k-1)] at this point.
Since v_0 -> ... -> v_(k-1) -> v_k is a shortest path, we have D[v_(k-1)] + w(v_(k-1), v_k) = D[v_k]; thus the condition holds, and we add the count of this path to N[v_k].
Note that a full proof of this algorithm is essentially an induction that follows this explanation more formally.
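For concreteness, here is a short Python sketch of the counting procedure, assuming the graph is given as a dict of (neighbour, weight) adjacency lists and D as a dict of the given distances (both formats are assumptions); the sort by D[v] is the O(V log V) part.

def count_shortest_paths(adj, D, s):
    """adj: dict mapping vertex -> list of (neighbour, weight) pairs of a directed
    graph with positive weights; D: dict of the given shortest distances from s.
    Returns N with N[v] = number of shortest (length D[v]) paths from s to v."""
    N = {v: 0 for v in D}
    N[s] = 1                                   # the empty path
    # build incoming edge lists so we can look at the edges (u, v) entering each v
    incoming = {v: [] for v in D}
    for u in adj:
        for v, w in adj[u]:
            incoming[v].append((u, w))
    # process vertices by increasing D[v]; this is the O(V log V) part
    for v in sorted(D, key=D.get):
        for u, w in incoming[v]:
            if D[u] + w == D[v]:               # (u, v) extends shortest paths to u
                N[v] += N[u]
    return N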