How can it be solved using Dijkstra’s algorithm? [closed] - algorithm

Suppose you are given a graph where each edge has a path cost, and each vertex also has a cost: if a path passes through a vertex, that vertex's cost is added to the path cost as well. How can this be solved using Dijkstra's algorithm?

First solution: duplicate the nodes to add an "internal" edge
Replace the non-oriented edges with oriented edges, and duplicate every node N into one "incoming" node Ni and one "outgoing" node No so that going through a node N in the original graph is equivalent to going through both nodes Ni and No in the new graph, and thus through the extra edge from Ni to No.
 A   B              Ao        Ai
  \ /                 \      /
   N     ------>  Bo - Ni - No - Bi      in the new graph, every edge is oriented
   |                  /      \           all the edges on this drawing are oriented left-to-right
   C                Co        Ci
Make sure edges between non-twin nodes are always oriented out-->in, and edges between twin nodes are always oriented in-->out. For instance there is an edge from Ni to No (twin nodes), and an edge from Bo to Ni (non twin nodes).
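A minimal Python sketch of this transformation, assuming the original graph is an adjacency dict graph[u] = [(v, edge_cost), ...] with every node present as a key, and node_cost[u] holding the vertex cost (all names are illustrative):

def split_nodes(graph, node_cost):
    # Each original node u becomes ('in', u) and ('out', u); the internal
    # edge ('in', u) -> ('out', u) carries the vertex cost of u.
    new_graph = {}
    for u in graph:
        new_graph[('in', u)] = [(('out', u), node_cost[u])]
        new_graph[('out', u)] = []
    for u, edges in graph.items():
        for v, w in edges:
            # an original edge u-v becomes ('out', u) -> ('in', v)
            new_graph[('out', u)].append((('in', v), w))
    return new_graph

Running a plain Dijkstra on new_graph from ('in', s) to ('out', t) then charges every vertex on the path, endpoints included; start from ('out', s) instead if the source's own cost should not be counted.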
Second solution: add half of the cost of each node to all of its edges
Add the cost of the source node to the cost of every one of its edges;
Add the cost of the target node to the cost of every one of its edges;
Add half of the cost of every other node to the cost of its edges.
Now you can check that in any path S-A-B-C-T, the costs of S and T are added once each, and the half-costs of A, B and C are added twice.
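A rough sketch of this reweighting, assuming the graph is an edge list of (u, v, w) triples and s, t are the fixed endpoints of the query (names are illustrative):

def reweight(edge_list, node_cost, s, t):
    # Endpoints contribute their full cost; every interior vertex contributes
    # half its cost per incident edge, hence its full cost over the two
    # path edges that touch it.
    def share(u):
        return node_cost[u] if u in (s, t) else node_cost[u] / 2.0
    return [(u, v, w + share(u) + share(v)) for (u, v, w) in edge_list]

Note that this transformation is tied to one fixed source/target pair, since s and t are treated specially.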

The cost that Dijkstra's algorithm assigns to each vertex is the cost to reach that vertex. You just need to incorporate the vertex costs into that cost calculation.
If only edges have costs, then the cost to reach a vertex is the cost to reach the previous vertex plus the edge cost.
If the vertices also have costs, then the cost to reach a vertex is the cost to reach the previous vertex, plus the cost of the edge and of the new vertex.
Otherwise, it's exactly the same algorithm.
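A minimal sketch of that modified relaxation, assuming adjacency lists graph[u] = [(v, w), ...] with every node as a key and a node_cost dict; whether the source's own cost is charged is a modelling choice (here it is):

import heapq

def dijkstra_with_vertex_costs(graph, node_cost, source):
    dist = {u: float('inf') for u in graph}
    dist[source] = node_cost[source]      # drop this term if the source is free
    heap = [(dist[source], source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry
        for v, w in graph[u]:
            nd = d + w + node_cost[v]     # edge cost plus the cost of entering v
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

As with standard Dijkstra, this is only correct when all edge and vertex costs are non-negative.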

Related

How to approach coding for node disjoint path [closed]

I am trying to solve the problem of node/vertex-disjoint paths in a directed graph and came across the idea of splitting each node into an in-node and an out-node. I understand the idea and how it works, and the related theorems such as Menger's theorem, but I'm still not sure how to code it efficiently.
Which data structure should I use so that I can split the vertices and still keep the time complexity under control? Is there an existing algorithm that describes how to approach the code?
Please help or suggest some appropriate link which may help me out.
Thanks
It's quite simple, actually. Let's say you have the graph as an edge list of pairs u v, meaning there's an edge from u to v.
If the nodes are not already integers, use a dictionary/hash/map to reduce them to integers in the range 1..n, where n is the number of nodes.
Now we "split" all nodes: each node i becomes 2 nodes, i and i+n, where i is considered the in-node and i+n the out-node.
The graph edges are modified accordingly: for every edge u --> v we instead store the edge u+n --> v.
We also add an edge from each node's in-node to its out-node, i.e. from node i to i+n.
Assign infinite capacity to all original edges and capacity 1 to the edges that connect an in-node to its out-node.
Now Node-disjoint paths from some node s to t can be found using any max-flow algorithm (Ford-Fulkerson, Edmonds-Karp, Dinic, etc)
Pseudo-code for building the residual network:
n = #nodes
for each node i in 1..n:
    residual_graph.addEdge(i, i+n, capacity=1)
    residual_graph.addEdge(i+n, i, capacity=0)
for each edge (u,v) in graph:
    residual_graph.addEdge(u+n, v, capacity=+Infinity)
    residual_graph.addEdge(v, u+n, capacity=0)
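A runnable Python version of this construction combined with a plain Edmonds-Karp max-flow is sketched below. It assumes nodes are the integers 1..n, the graph is a list of directed (u, v) pairs, and s and t are not directly adjacent (the usual setting when counting internally node-disjoint paths). Original edges get capacity n rather than infinity, which changes nothing since the flow can never exceed n:

from collections import defaultdict, deque

def max_node_disjoint_paths(n, edges, s, t):
    cap = defaultdict(int)              # residual capacities, keyed by (u, v)
    adj = defaultdict(list)

    def add_edge(u, v, c):
        if v not in adj[u]:
            adj[u].append(v)
            adj[v].append(u)            # reverse (residual) direction
        cap[(u, v)] += c

    for i in range(1, n + 1):
        add_edge(i, i + n, 1)           # in-node i -> out-node i+n, capacity 1
    for u, v in edges:
        add_edge(u + n, v, n)           # out-node of u -> in-node of v

    source, sink = s + n, t             # leave s via its out-node, enter t at its in-node
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow                 # no augmenting path left
        # bottleneck capacity along the path found
        bottleneck, v = float('inf'), sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[(parent[v], v)])
            v = parent[v]
        # augment: push the bottleneck along the path
        v = sink
        while parent[v] is not None:
            cap[(parent[v], v)] -= bottleneck
            cap[(v, parent[v])] += bottleneck
            v = parent[v]
        flow += bottleneck

For example, max_node_disjoint_paths(4, [(1, 2), (1, 3), (2, 4), (3, 4)], 1, 4) returns 2.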

approximation ratio of maximum independent set? [closed]

Suppose we have a weighted grid graph and our problem is to find a maximum-weight independent set. There is a greedy algorithm that each time chooses the heaviest node and removes it and its neighbors, until all nodes of G have been chosen or removed. We want to prove that W(S) >= (1/4) W(T), where S is our greedy result and T is the OPT solution.
Let S be the result of our greedy algorithm and T be an arbitrary independent set, which can be the OPT. We know that for any such T and any node v that belongs to T-S, there exists a node v' in S which is a neighbor of v with w(v) <= w(v').
Any idea how to prove that?
Just use your last statement, and consider T as the maximum independent set; then you have these two results:
For every node v in T-S there exists a node u in S with W(v) <= W(u).
Each node u in S is a neighbor of at most 4 nodes in T.
Now use them :)
The desired result can be obtained with the following proof.
Let S be the set generated by the greedy algorithm, let T be an independent set of maximal weight. We will stepwise transform T into S and bound the loss for each transformation step.
Choose v in T\S with maximal weight. By the statement included in the question above, there exists v' in S such that w(v') >= w(v); choose such a v'. Let N be the neighbourhood of v' in T; N contains v and at most 4 vertices (as we have a grid graph). As v was chosen with maximal weight and w(v')>=w(v), we obtain w(v')>=w(N)/4. We set T':=(T\N) and add v' to it. By construction, T' remains an independent set and we have w(T') >= w(T) - (3/4)w(N).
In total, for each exchange step, vertices from T\S get eliminated, but a node from S is added such that the added total weight is at least one quarter of the lost total weight.
Furthermore, the sets N constructed in each step are disjoint, which means that in each step at least one quarter of w(N) is preserved. In total, this is how S is constructed, and S has weight at least (1/4)w(T).
Note that the input graph is not required to be a grid graph; maximum degree 4 is sufficient. Furthermore, the proof generalizes to an arbitrary graph by replacing 4 with the maximum degree Δ, yielding an approximation ratio of 1/Δ.
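For reference, the greedy algorithm being analysed is roughly the following sketch, assuming weights[v] gives the node weight and adj[v] its set of neighbours (names are illustrative):

def greedy_weighted_mis(weights, adj):
    # Repeatedly take the heaviest remaining node and discard its neighbours.
    # On graphs of maximum degree 4 (e.g. grid graphs) the result has at
    # least 1/4 of the optimal weight, by the argument above.
    remaining = set(weights)
    chosen = []
    while remaining:
        v = max(remaining, key=lambda u: weights[u])
        chosen.append(v)
        remaining -= {v} | set(adj.get(v, ()))
    return chosen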

Maximum number of edges in unconnected graph [closed]

In an undirected graph with n vertices and no edges, what is the maximum number of edges that can be added so that the graph remains unconnected? This is an interview question.
1. NC2
2. (N-1)C2
3. N!
4. (N-1)!
The maximum number of edges in a graph with N vertices is NC2 = N(N-1)/2.
Note that, to remain unconnected, one of the vertices should not have any edges. More formally, there has to be a cut (across which there won't be any edges) with one side having only one vertex. Why not more than one vertex? Proof by induction:
The cases for 0, 1 and 2 vertices are trivial.
Consider a graph with 3 vertices. The best cut will be one with 2 vertices on one side and 1 vertex on the other side.
Now assume the best cut is one with N-1 vertices on one side and 1 vertex on the other with N >= 3. Now try to add a vertex. Adding the vertex to the side with one vertex will result in one edge that can be added. Adding the vertex to the other side will result in N-1 possible edges. Clearly N-1 > 1 for N >= 3. Thus it's always better to add the vertex to the side with N-1 vertices.
Now there are two ways to go from here:
Consider the graph without the isolated vertex. The maximum number of edges of this sub-graph on N-1 vertices is (N-1)C2.
Consider the maximum number of edges of the graph as is and subtract the number of edges from one vertex. This gives NC2 - (N-1) = N(N-1)/2 - 2(N-1)/2 = (N-2)(N-1)/2 = (N-1)C2.
So the answer is (N-1)C2, i.e. option 2.
(b) (n-1)C2
An example of such a graph is a complete graph of n-1 vertices and one isolated vertex.
In this example, the complement graph would have nC2 - (n-1)C2 = n-1 edges.
And either given graph or its complement is connected (proof).
Hence, if we constructed a graph with more than (n-1)C2 edges, its complement would have fewer than n-1 edges and couldn't be connected, so our graph would be.
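For small n the claim is easy to check by brute force; a throwaway sketch (exponential, only meant for n up to about 6):

from itertools import combinations

def is_connected(n, edge_set):
    adj = {v: set() for v in range(n)}
    for u, v in edge_set:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def max_edges_disconnected(n):
    # largest k such that some k-edge simple graph on n vertices is disconnected
    all_edges = list(combinations(range(n), 2))
    for k in range(len(all_edges), -1, -1):
        if any(not is_connected(n, es) for es in combinations(all_edges, k)):
            return k

# max_edges_disconnected(n) == (n - 1) * (n - 2) // 2 for n in 2..6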

Finding all vertices on negative cycles [closed]

I know that the problem of checking whether a given edge of a weighted digraph belongs to a negative cycle is NP-complete (Finding the minimal subgraph that contains all negative cycles), and that Bellman-Ford allows checking a vertex for the same thing in O(|V|*|E|) time. But what if I want to find all vertices belonging to negative cycles? I wonder if it could be done faster than Floyd-Warshall's O(|V|^3).
I don't think Floyd-Warshall does the job of finding these vertices. Using a similar approach as taken in the post you're referring to, it can be shown that finding the set of all vertices that lie on a negative cycle is NP-complete as well.
The related post shows that one may use the algorithm to find the set of all edges that lie on a negative cycle to solve the hamiltonian cycle problem, which means that the former problem is NP-complete.
If we can reduce the problem of finding all edges that lie on a negative cycle to the problem of finding the set of all vertices that lie on a negative cycle, we've shown NP-completeness of the latter problem.
For each edge (u,w) in your weighted digraph, introduce a new auxiliary vertex v, and split (u,w) into two edges (u,v) and (v,w). The weight of (u,w) can be assigned to either (u,v) or (v,w).
Now apply the magic polynomial-time algorithm to find all the vertices that lie on a negative cycle, and take the subset that consists of the auxiliary vertices. Since each auxiliary vertex is associated with an edge, we've solved the problem of finding the minimal subgraph that contains all negative cycles, and we can thus also solve the hamiltonian cycle problem in polynomial time, which implies P = NP. Assuming P != NP, finding all vertices that lie on a negative cycle is NP-complete.
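The edge-splitting step of this reduction, as a small sketch (assuming the digraph is given as a list of weighted (u, v, w) edges; the auxiliary vertex names are illustrative):

def split_edges(edges):
    # Replace every edge (u, v, w) by u -> aux -> v through a fresh auxiliary
    # vertex; the original weight goes on the first half, 0 on the second.
    new_edges = []
    for k, (u, v, w) in enumerate(edges):
        aux = ('aux', k)
        new_edges.append((u, aux, w))
        new_edges.append((aux, v, 0))
    return new_edges

An auxiliary vertex lies on a negative cycle exactly when its original edge does, which is what the argument above relies on.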

How to find mother vertex in a directed graph in O(n+m)? [closed]

A mother vertex in a directed graph G = (V,E) is a vertex v such that all other vertices in G can be reached by a directed path from v.
Give an O(n+m) algorithm to test whether graph G contains a mother vertex.
(c) from Skiena manual
I have only found an O(n(n+m)) way.
Algorithm:
a) Do DFS/BFS of the graph and keep track of the last finished vertex 'x' .
b) If there exist any mother vertex, then 'x' is one of them. Check if 'x' is a mother vertex by doing DFS/BFS from vertex 'x'.
Time Complexity O(n+m) + O(n+m) = O(n+m)
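A sketch of that two-pass approach in Python, assuming graph maps every vertex (including sinks) to a list of its out-neighbours:

def find_mother_vertex(graph):
    def dfs(start, visited):
        visited.add(start)
        stack = [start]
        while stack:
            u = stack.pop()
            for v in graph.get(u, ()):
                if v not in visited:
                    visited.add(v)
                    stack.append(v)

    # Pass 1: the root of the last DFS tree is the vertex that finishes last,
    # and therefore the only possible mother vertex.
    visited = set()
    candidate = None
    for u in graph:
        if u not in visited:
            dfs(u, visited)
            candidate = u

    if candidate is None:              # empty graph
        return None

    # Pass 2: verify that the candidate reaches every vertex.
    reached = set()
    dfs(candidate, reached)
    return candidate if len(reached) == len(graph) else None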
Step 1. Do a topological sort of the vertices of the directed graph.
Step 2. Now check whether we can reach all vertices from the first vertex of the topological order from step 1.
To perform step 2, initialize the array discovered[i] to false again and do a DFS starting from the first node of the topologically sorted vertices.
If all vertices can be reached, then the graph has a mother vertex, and the mother vertex is the first vertex in the topological order.
time complexity:
step1 takes O(n + m), step 2 takes O(n + m)
so total O(n+m) + O(n+m) = O(n+m)
I saw the solution. I don't think we need to find SCCs. Just do a DFS from a random vertex and then do the DFS from the vertex with the last finish time. If there is a mother vertex, then it has to be this one.
Do topological sorting on the graph. Example : A C D E B
Find if there exists a path from the first node in the topological order to all other nodes
2.a. Initialize the distance from A to all the nodes as infinite and distance from A to A as 0.
2.b. For all the nodes in topological order, Update shortest distance for all adjacent nodes from A.
Loop over all the nodes to see if any distance is still infinite. If there is an infinite distance, there's no path from A to that node, so return false.
If the loop over all the nodes completes without finding one, return true.
We can find the mother vertex in O(m+n) using Kosaraju's algorithm.
First do a DFS or BFS from any vertex, track the visited vertices, and push each vertex onto a STACK as it finishes.
The top element is the mother vertex if it can reach all vertices, so apply DFS or BFS again from it.
See this link for a DFS using recursion and a stack to track the finishing times of all vertices.
So start the second DFS/BFS from the stack top; if there is a single vertex that is not visited, then there doesn't exist a mother vertex.
Here is the algorithm for finding the mother vertex in a graph G = (V,E):
Do a DFS traversal of the given graph. While doing the traversal, keep track of the last finished vertex 'v'. This step takes O(V+E) time.
If there exists a mother vertex (or vertices), then 'v' must be one of them. Check whether v is a mother vertex by doing a DFS/BFS from v. This step also takes O(V+E) time.
