Problem: Find the 'shortest cycle in an undirected weighted graph which contains every node'. All weights are positive. A node may be visited more than once; this is what distinguishes the problem from a Hamiltonian cycle (TSP).
A naïve attempt might be to take the minimum spanning tree (MST) and backtrack along it to get to the starting node. This gives a closed walk of length 2*MST, but it is not necessarily the minimum cycle.
Example: Consider a complete graph with vertices 1,2,3,4 and edge costs c12=c13=c14=1 and c23=c24=c34=100. TSP distance = 202 (1 -> 2 -> 3 -> 4 -> 1). Shortest cycle distance = 6 (1 -> 2 -> 1 -> 3 -> 1 -> 4 -> 1)
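A quick brute-force check of these numbers (a sketch of my own; the reduction used here, running TSP over shortest-path distances, i.e. the metric closure, is an assumption of this illustration and is not part of the problem statement):

from itertools import permutations

INF = float("inf")
n = 4
w = [[INF] * n for _ in range(n)]
for i, j, c in [(0, 1, 1), (0, 2, 1), (0, 3, 1), (1, 2, 100), (1, 3, 100), (2, 3, 100)]:
    w[i][j] = w[j][i] = c

# all-pairs shortest-path distances (the "metric closure"), via Floyd-Warshall
d = [row[:] for row in w]
for i in range(n):
    d[i][i] = 0
for k in range(n):
    for i in range(n):
        for j in range(n):
            d[i][j] = min(d[i][j], d[i][k] + d[k][j])

def best_tour(dist):
    # brute-force TSP: cheapest closed tour 0 -> ... -> 0 visiting each vertex exactly once
    return min(sum(dist[a][b] for a, b in zip((0,) + p, p + (0,)))
               for p in permutations(range(1, n)))

print(best_tour(w))  # 202: classic TSP on the original edge weights
print(best_tour(d))  # 6:   shortest closed walk, computed as TSP on the metric closure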
Edit: I am looking for an algorithm to find the shortest cycle.
The Wikipedia page on TSP mentions a special case called "metric TSP", in which the distances satisfy the triangle inequality.
In metric TSP, it does not matter whether or not you can visit the same city twice, because doing so is never necessary. You can always easily remove all the repeated visits without making your path any longer.
Every instance of "metric TSP" is, therefore, an instance of your problem. Metric TSP is still NP-hard, so your problem is too.
Related
I have an undirected and connected (not complete) graph of vertices, where u and v can be any 2 distinct vertices. I want to construct the minimum weight circuit that starts from a vertex u, passes through v, then returns back to u without repeating any edges. Can this be done by doing the following?
Finding the shortest path from u to v - call this p1
Removing all constituent edges of p1 from the graph
Finding the new shortest path from v to u - call this p2
Returning all deleted edges to the graph, and concatenating p1 and p2 together - call this c1
Is c1 the minimum weight circuit that can be constructed, considering the constraint of passing through both u and v? If so, how can I prove it, if not, why not?
It seems to make sense to me, as all the paths contained within c1 are also shortest paths themselves; however, I can't quite shake the feeling that I might be missing something.
EDIT: I have changed "fully connected graph" to "connected graph". "fully" implied that the graph is complete, which is not what I meant.
Here is a counterexample:
In the case of a complete graph, assume all edges not shown in the picture have some large weight (like 1000).
Step 1 (p1) would find the shortest path, with edge weights 1 - 1 - 1, and exclude its edges, prohibiting the actual optimal circuit, which uses the edges of weights 3 - 1 - 1 - 3.
As suggested by Falk Hüffner, this task can be solved by the edge-disjoint shortest pair algorithm.
If I understand correctly, this is equivalent to finding two edge-disjoint paths from u to v while minimizing their total weight. This is a special case of Minimum Cost Flow with a flow of two, so it can be solved in polynomial time and is likely not NP-hard. Suurballe's algorithm solves it for directed graphs, and it should be possible to adapt it to undirected graphs (it seems that this Wikipedia page is attempting to do this).
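A minimal sketch of that min-cost-flow formulation (my own illustration; the use of networkx, the function name, and integer edge weights are assumptions, not part of the answer). Each undirected edge becomes two directed arcs of capacity 1, and pushing two units of flow from u to v yields two edge-disjoint u-v paths of minimum total weight; with strictly positive weights an optimal flow never uses both directions of the same undirected edge.

import networkx as nx

def min_weight_circuit_through(edges, u, v):
    # edges: iterable of (a, b, w) describing an undirected graph with positive integer weights
    G = nx.DiGraph()
    for a, b, w in edges:
        # each undirected edge becomes two directed arcs of capacity 1
        G.add_edge(a, b, capacity=1, weight=w)
        G.add_edge(b, a, capacity=1, weight=w)
    G.nodes[u]["demand"] = -2   # u supplies two units of flow
    G.nodes[v]["demand"] = 2    # v absorbs two units of flow
    # raises nx.NetworkXUnfeasible if two edge-disjoint u-v paths do not exist
    flow = nx.min_cost_flow(G)
    return nx.cost_of_flow(G, flow)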
Given a weighted undirected graph, I need to find the shortest path between two nodes, a classical shortest path problem actually. But there is one more constraint: each node has a "reduction" value that can be used to reduce the cost of the following edges for one traversal (not only adjacent edges, and reductions are not cumulative). So you can reduce the cost of an edge using the "reduction" of one of the nodes you went through before (the final cost of each edge can't be less than 0).
Note that once we have gone through a node with a reduction, we can use it again for all the following edges (not just adjacent ones), and it is available an unlimited number of times. Reductions don't accumulate.
Let's consider this graph:
In this graph, the shortest path from node 1 to node 5 is:
1 -> 4 for a cost of 13 (15-2)
4 -> 3 for a cost of 0 (12-12)
3 -> 5 for a cost of 0 (10-12). In this case, we reuse the reduction of node 4 because it is bigger than the reduction of node 3 (we went through node 4, so its reduction of 12 is available an unlimited number of times). The cost is 0 and not -2 because the weight of an edge can't be negative.
Then the total cost from node 1 to node 5 is 13 + 0 + 0 = 13
To solve this problem, I've tried to use the classical Dijkstra/Bellman-Ford algorithms, but they didn't work. Can you help me with this?
It seems this can be solved with a variation of Bellman-Ford.
Every path up to a given node can be summarised as a pair (C, D) where C is the cost of that path (after discounts) and D is the best discount factor available on that path. Since a discount can be reused an unlimited number of times once that node has been visited, it makes sense to always use the biggest discount seen so far on that path. For example, the path (1 -> 4 -> 3) has cost C = 13 and discount D = 12.
The complication over the undiscounted problem is that we cannot tell from the cost what the "best" path is to nodes in between the source and destination. In your example the path (1 -> 2 -> 3) has lower cost than (1 -> 4 -> 3), but the latter has a better discount which is why the best path from 1 to 5 is (1 -> 4 -> 3 -> 5).
Rather than recording the lowest-cost path to each node (as in the Bellman-Ford algorithm), we need to record all "feasible" paths from the source to that node found so far. A path can be said to be feasible if there is no other known path that has both a lower cost and a better discount. After the algorithm terminates, we can take, from all feasible paths from source to destination, the one with the smallest cost, ignoring the discount.
(Edit: I originally suggested Dijkstra could be used, but I think that may not be so straightforward. I'm not sure we can choose "the closest" unvisited node in any meaningful way such that we are guaranteed to find the minimal path. Someone else might see how to make it work though...)
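A minimal sketch of the label-correcting approach described above (my own illustration; the function name, the edge-list format, and the discounts dictionary are assumptions, and the question's example graph is not reproduced here because it was only given as a picture):

from collections import defaultdict

def shortest_discounted_path(edges, discounts, source, target):
    # edges: iterable of (u, v, w) for an undirected graph
    # discounts: dict node -> reduction value available at that node
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    # labels[v] holds the feasible (cost, best_discount_seen) pairs for v
    labels = defaultdict(list)
    labels[source].append((0, discounts[source]))

    changed = True
    while changed:  # Bellman-Ford style relaxation rounds
        changed = False
        for u in list(labels):
            for cost, disc in list(labels[u]):
                for v, w in adj[u]:
                    new_cost = cost + max(0, w - disc)   # an edge never costs less than 0
                    new_disc = max(disc, discounts[v])   # the best discount seen so far
                    # skip the candidate if some known label already dominates it
                    if any(c <= new_cost and d >= new_disc for c, d in labels[v]):
                        continue
                    # drop labels that the candidate dominates, then keep it
                    labels[v] = [(c, d) for c, d in labels[v]
                                 if not (new_cost <= c and new_disc >= d)]
                    labels[v].append((new_cost, new_disc))
                    changed = True
    # take the cheapest feasible label at the target, ignoring its discount
    costs = [c for c, _ in labels[target]]
    return min(costs) if costs else None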
I think you can use a Dijkstra-type algorithm. Dijkstra's algorithm can be thought of as computing a spanning tree that contains the shortest paths from a source vertex to all other vertices. Let's call this the "Dijkstra tree"; it contains all the shortest paths from a given source vertex.
Dijkstra keeps adding new vertices to the current tree. For the next vertex, it chooses the one that is closest to the current tree (see the animation on the Wikipedia page).
So when Dijkstra adds a new vertex v via an already inserted vertex u (of the current tree), the edge weight of {u, v} has to be considered. In your case, the cost is not just the edge weight of {u, v}, but that weight reduced by the largest vertex reduction along the shortest path to u in the current tree (reductions don't accumulate). So you have to remember, in each vertex, the best vertex reduction along its path in this "Dijkstra" tree.
Given: a complete directed weighted graph. All weights are positive. Is there any simple way (a heuristic?) to find the shortest (in terms of total weight) path that visits all vertices? The number of vertices is around 25.
This problem seems to be close to the Asymmetric Travelling Salesman Problem, but I don't require this path to be a cycle.
I would recommend a k-shortest path approach:
http://www.mathworks.com/matlabcentral/fileexchange/32513-k-shortest-path-yen-s-algorithm
I think this is your best bet... Here is an example: let's say you have 8 nodes and need to get from node 1 to node 8 via the shortest possible path. Let's assume all nodes are connected to each other (i.e., node 1 is connected to nodes 2:8, and so on). You will have to generate the "cost matrix" based on your problem.
% dijkstra() and kShortestPath() come from the File Exchange submission linked above
costMatrix = rand(8);   % example cost matrix; build yours from your problem
[shortestPath, cost] = dijkstra(costMatrix, 1, 8);
% Alternatively, return the 10 shortest paths from node 1 to node 8:
[shortestPaths, costs] = kShortestPath(costMatrix, 1, 8, 10);
Position (i,j) in the cost matrix is the cost of traveling from node i to node j. If costMatrix(i,j) = inf, there is no connection between node i and node j.
After looking into hidden Markov models, I found a number of potential problems: the emission matrix would be difficult to define, and you can run into situations where you enter a "state" two or more times during a single "route". With the k-shortest path approach, this problem doesn't occur.
I have a connected, undirected graph with N nodes and 2N-3 edges. You can consider the graph as built on top of an initial graph, which has 3 nodes and 3 edges. Every node added to the graph has 2 connections to nodes already in the graph. When all nodes have been added (N-3 nodes added in total), the final graph is constructed.
Originally I'm asked: what is the maximum number of nodes in this graph that can be visited exactly once (except for the initial node), i.e., what is the maximum number of nodes contained in the largest "Hamiltonian path" of the given graph? (Okay, saying largest Hamiltonian path is not a valid phrase, but considering the question's nature, I need to find the maximum number of nodes that are visited exactly once on a trip that ends at the initial node. I thought of it as a subgraph which is Hamiltonian and contains the maximum number of nodes, thus the largest possible "Hamiltonian path".)
Since I'm not asked to find a path, I should first check whether a Hamiltonian path exists for the given number of nodes. I know that cycle graphs (Cn) are Hamiltonian (I also know Ore's theorem for Hamiltonian graphs, but the graph I will be working on will most likely not be dense, which makes Ore's theorem pretty much useless in my case). Therefore I need an algorithm for checking whether the graph is a cycle graph, i.e., whether there exists a cycle which contains all the nodes of the given graph.
Since DFS is used for detecting cycles, I thought some minor manipulation of DFS could help me detect what I am looking for, e.g. keeping track of explored nodes and finally checking whether the last node visited has a connection to the initial node. Unfortunately, I could not succeed with that approach.
Another approach I tried was excluding a node and then trying to reach one of its adjacent nodes starting from its other adjacent node. That algorithm may not give correct results, depending on the chosen adjacent nodes.
I'm pretty much stuck here. Can you help me think of another algorithm to tell me if the graph is a cycle graph?
Edit
I realized, with the help of the comment (thank you for it, n.m.):
A cycle graph consists of a single cycle and has N edges and N vertices. If there exist a cycle which contains all the nodes of the given graph, that's a Hamiltonian cycle. – n.m.
that I am actually searching for a Hamiltonian path, which I did not intend to do :)
On second thought, I think checking the Hamiltonian property of the graph while building it will be more efficient, which is also what I'm looking for: time efficiency.
After some thinking, I thought that whatever the number of nodes is, the graph seems to be Hamiltonian due to the node-addition criteria. The problem is that I can't be sure and I can't prove it. Does adding nodes in that fashion, i.e., adding new nodes with 2 edges which connect the added node to existing nodes, alter the Hamiltonian property of the graph? If it doesn't alter the Hamiltonian property, how so? If it does alter it, again, how so? Thanks.
EDIT #2
I, again, realized that building the graph the way I described might alter the Hamiltonian property. Consider an input given as follows:
1 3
2 3
1 5
1 3
This input says that the 4th node is connected to node 1 and node 3, the 5th to node 2 and node 3, and so on.
The 4th and 7th nodes are connected to the same nodes, thus lowering the maximum number of nodes that can be visited exactly once by 1. If I detect these collisions (NOT including an input such as "3 3", which is an example you suggested, since the problem states that the newly added node is connected to 2 other nodes) and lower the maximum number of nodes, starting from N, I believe I can get the right result.
See, I do not choose the connections; they are given to me, and I have to find the max. number of nodes.
I think counting these repeated connections while building the graph and subtracting the number of repeated connections from N will give the right result? Can you confirm this, or is there a flaw in this algorithm?
What we have in this problem is a connected, undirected graph with N nodes and 2N-3 edges. Consider the graph given below:
A
/ \
B _ C
( )
D
The graph does not have a Hamiltonian cycle. But the graph is constructed conforming to your rules of adding nodes. So searching for a Hamiltonian cycle may not give you the solution. Moreover, even if it were possible, Hamiltonian cycle detection is an NP-complete problem with O(2^N) complexity. So the approach may not be ideal.
What I suggest is to use a modified version of Floyd's Cycle Finding algorithm (Also called the Tortoise and Hare Algorithm).
The modified algorithm is,
1. Initialize a List CYC_LIST to ∅.
2. Add the root node to the list CYC_LIST and set it as unvisited.
3. Call the function Floyd() twice with the unvisited node in the list CYC_LIST for each of the two edges. Mark the node as visited.
4. Add all the previously unvisited vertices traversed by the Tortoise pointer to the list CYC_LIST.
5. Repeat steps 3 and 4 until no more unvisited nodes remain in the list.
6. If the list CYC_LIST contains N nodes, then the Graph contains a Cycle involving all the nodes.
The algorithm calls Floyd's cycle-finding algorithm a maximum of 2N times. Floyd's cycle-finding algorithm takes linear time ( O(N) ). So the complexity of the modified algorithm is O(N^2), which is much better than the exponential time taken by the Hamiltonian cycle based approach.
One possible problem with this approach is that it will detect closed paths along with cycles unless stricter checking criteria are implemented.
Reply to Edit #2
Consider the Graph given below,
A------------\
/ \ \
B _ C \
|\ /| \
| D | F
\ / /
\ / /
E------------/
According to your algorithm this graph does not have a cycle containing all the nodes.
But there is a cycle in this graph containing all the nodes.
A-B-D-C-E-F-A
So I think there is some flaw in your approach. But if your algorithm is correct, it is far better than mine, since mine takes O(n^2) time, whereas yours takes just O(n).
To add some clarification to this thread: finding a Hamiltonian Cycle is NP-complete, which implies that finding a longest cycle is also NP-complete because if we can find a longest cycle in any graph, we can find the Hamiltonian cycle of the subgraph induced by the vertices that lie on that cycle. (See also for example this paper regarding the longest cycle problem)
We can't use Dirac's criterion here: Dirac only tells us that minimum degree >= n/2 implies a Hamiltonian cycle, which is the implication in the opposite direction of what we would need. The converse is definitely wrong: take a cycle over n vertices; every vertex in it has degree exactly 2, no matter the size of the cycle, but it has (in fact, is) an HC. What you can tell from Dirac is that having no Hamiltonian cycle implies minimum degree < n/2, which is of no use here, since we don't know whether our graph has an HC or not, so we can't use the implication. (Nevertheless, every graph we construct according to what the OP described will have a vertex of degree 2, namely the last vertex added to the graph, so for arbitrary n we have minimum degree 2.)
The problem is that you can construct both graphs of arbitrary size that have an HC and graphs of arbitrary size that do not have one. For the first part: if the original triangle is A, B, C and the added vertices are numbered 1 to k, then connect the 1st added vertex to A and C, and the (k+1)-th added vertex to A and to the k-th added vertex, for all k >= 1. The cycle is A, B, C, 1, 2, ..., k, A. For the second part, connect both vertices 1 and 2 to A and B; that graph does not have an HC.
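A small brute-force check of these two constructions (my own illustration; the helper names are made up, and the check simply tries all vertex orderings, which is fine for small sizes):

from itertools import permutations

def has_hamiltonian_cycle(nodes, edges):
    E = {frozenset(e) for e in edges}
    first, rest = nodes[0], nodes[1:]
    for perm in permutations(rest):
        tour = (first,) + perm + (first,)
        if all(frozenset(p) in E for p in zip(tour, tour[1:])):
            return True
    return False

def with_hc(k):
    # triangle A, B, C; added vertex 1 goes to A and C, added vertex i+1 goes to A and i
    edges = [("A", "B"), ("B", "C"), ("A", "C"), (1, "A"), (1, "C")]
    edges += [(i + 1, "A") for i in range(1, k)] + [(i + 1, i) for i in range(1, k)]
    return ["A", "B", "C"] + list(range(1, k + 1)), edges

def without_hc():
    # triangle A, B, C; added vertices 1 and 2 both go to A and B
    edges = [("A", "B"), ("B", "C"), ("A", "C"), (1, "A"), (1, "B"), (2, "A"), (2, "B")]
    return ["A", "B", "C", 1, 2], edges

print(has_hamiltonian_cycle(*with_hc(4)))    # True
print(has_hamiltonian_cycle(*without_hc()))  # False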
What is also important to note is that the property of having an HC can change back and forth during construction. You can both create and destroy the HC property when you add a vertex, so you would have to check for it every time you add a vertex. A simple example: take the graph after the 1st vertex was added, and add a second vertex with edges to the same two vertices of the triangle that the 1st vertex was connected to. This turns a graph with an HC into a graph without an HC. The other way around: now add a 3rd vertex and connect it to 1 and 2; this turns a graph without an HC into a graph with an HC.
Storing the last known HC during construction doesn't really help you because it may change completely. You could have an HC after the 20th vertex was added, then not have one for k in [21,2000], and have one again for the 2001st vertex added. Most likely the HC you had on 23 vertices will not help you a lot.
If you want to figure out how to solve this problem efficiently, you'll have to find criteria that work for all your graphs that can be checked for efficiently. Otherwise, your problem doesn't appear to me to be simpler than the Hamiltonian Cycle problem is in the general case, so you might be able to adjust one of the algorithms used for that problem to your variant of it.
Below I have added three extra nodes (3, 4, 5) to the original graph, and it does seem like I can keep adding new nodes indefinitely while keeping the Hamiltonian cycle property. For the graph below, the cycle would be 0-1-3-5-4-2-0.
1---3---5
/ \ / \ /
0---2---4
As there are no extra restrictions on how you can add a new node with two edges, I think that by construction you can have a graph that keeps the Hamiltonian cycle property.
Suppose we are given a weighted graph G(V,E).
The graph contains N vertices (numbered from 0 to N-1) and M bidirectional edges.
Each edge (vi, vj) has a positive distance d (i.e., the distance between the two vertices vi and vj is d).
There is at most one edge between any two vertices, and there are no self-loops (i.e., no edge connects a vertex to itself).
Also, we are given S, the source vertex, and D, the destination vertex.
Let Q be the number of queries; each query contains one edge e(x, y).
For each query, we have to find the shortest path from the source S to the destination D, assuming that edge (x, y) is absent from the original graph.
If no path exists from S to D, then we have to print No.
The constraints are high: 0 <= N, Q, M <= 25000.
How to solve this problem efficiently?
Till now, what I did is implement the simple Dijkstra algorithm. For each query, I set the weight of (x, y) to infinity and run Dijkstra's shortest path again. But this approach will be very slow, as the overall complexity will be Q * (time complexity of Dijkstra's shortest path).
Example:
N=6,M=9
S=0 ,D=5
(u,v,cost(u,v))
0 2 4
3 5 8
3 4 1
2 3 1
0 1 1
4 5 1
2 4 5
1 2 1
1 3 3
Total Queries =6
Query edge=(0,1) Answer=7
Query edge=(0,2) Answer=5
Query edge=(1,3) Answer=5
Query edge=(2,3) Answer=6
Query edge=(4,5) Answer=11
Query edge=(3,4) Answer=8
First, compute the shortest path tree from source node to destination.
Second, loop over all the queries and cut the shortest path at the edge specified by the query; this defines a min-cut problem, where you have the distance between the source node and the frontier of one partition, and between the frontier of the other partition and the destination; you can compute this very easily, in at most O(|E|).
Thus, this algorithm requires O(Q|E| + |V|log|V|), asymptotically faster than the naïve solution when |V|log|V| > |E|.
This solution reuses Dijkstra's computation, but it still processes each query individually, so perhaps there is room for improvement by exploiting, in successive queries, the work done in a previous query, for example by observing the shape of the cut induced by the edge.
For each query the graph changes only very slightly, so you can reuse a lot of your computation.
I suggest the following approach:
Compute the shortest path from S to all other nodes (Dijkstra's algorithm does that for you already). This gives you a shortest path tree T.
For each query, take this tree, pruned by the edge (x, y) from the query. This might be the original tree (if (x, y) was nowhere on the tree) or a smaller tree T'.
If D is in T', you can take the original shortest path.
Otherwise start Dijkstra, but use the labels you already have from T' (these paths are already shortest) as permanent labels.
If you run the Dijkstra from the last step, you can reuse the pruned-off part of tree T in the following way: every time you want to mark a node permanent (which is one of the nodes not in T'), you may attach the entire subtree of that node (from the original tree T) to your new shortest path tree and label all of its nodes permanent.
This way you reuse as much information as possible from the first shortest path run.
In your example this would mean:
Compute shortest path tree:
0->1->2->3->4->5
(in this case a very simple one)
Now assume we get query (1,2).
We prune edge (1,2) leaving us with
0->1
From there we start Dijkstra getting 2 and 3 as next permanent marked nodes.
We connect 0 to 2 and 1 to 3 in the new shortest path tree and attach the old subtree hanging from 3:
2<-0->1->3->4->5
So we get the shortest path by running just one additional step of Dijkstra's algorithm.
The correctness of the algorithm follows from the fact that all paths in tree T are at most as long as the corresponding paths in the new graph from the query (where every shortest path can only get longer). Therefore we can reuse every path from the tree that is still feasible (i.e., from which no edge was removed).
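A minimal sketch of this approach (my own illustration; the adjacency-list format and the function names are assumptions, and the subtree-reattachment optimization from the last step is omitted for brevity):

import heapq

def dijkstra(adj, S):
    # adj: dict node -> list of (neighbour, weight); returns distances and the tree T as a parent map
    dist, parent = {S: 0}, {S: None}
    pq = [(0, S)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, parent

def query_without_edge(adj, S, D, dist, parent, x, y):
    # If (x, y) is not a tree edge, the old distance is still valid.
    if parent.get(x) == y:
        cut_child = x
    elif parent.get(y) == x:
        cut_child = y
    else:
        return dist.get(D, float("inf"))
    # Labels of the subtree hanging below the removed tree edge are no longer trusted.
    children = {}
    for node, par in parent.items():
        children.setdefault(par, []).append(node)
    lost, stack = set(), [cut_child]
    while stack:
        node = stack.pop()
        lost.add(node)
        stack.extend(children.get(node, []))
    # Seeded Dijkstra: every node outside the lost subtree keeps its old label as permanent.
    new_dist = {v: d for v, d in dist.items() if v not in lost}
    pq = [(d, v) for v, d in new_dist.items()]
    heapq.heapify(pq)
    while pq:
        d, u = heapq.heappop(pq)
        if d > new_dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if {u, v} == {x, y}:          # the queried edge is treated as absent
                continue
            if d + w < new_dist.get(v, float("inf")):
                new_dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return new_dist.get(D, float("inf"))  # float("inf") means "No" (D is unreachable)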
If performance matters a lot, you can improve on the Dijkstra performance through a lot of implementation tricks. A good entry point for this might be the DIMACS Shortest Path Implementation Challenge.
One simple optimization: first run Dijkstra on the full graph (with no edges removed).
Then, for each query, check whether the queried edge belongs to that shortest path. If it doesn't, removing the edge won't make any difference.
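A minimal sketch of this check (my own illustration using networkx, which the answer does not prescribe; when the edge does lie on the found path, the sketch simply falls back to the question's own per-query Dijkstra on the modified graph):

import networkx as nx

def answer_queries(G, S, D, queries):
    # One full Dijkstra run up front; G is an undirected weighted nx.Graph.
    dist, path = nx.single_source_dijkstra(G, S, target=D, weight="weight")
    on_path = {frozenset(e) for e in zip(path, path[1:])}   # edges used by that shortest path
    answers = []
    for x, y in queries:
        if frozenset((x, y)) not in on_path:
            answers.append(dist)            # the found path survives, so the distance is unchanged
        else:
            H = G.copy()
            H.remove_edge(x, y)
            try:
                answers.append(nx.dijkstra_path_length(H, S, D, weight="weight"))
            except nx.NetworkXNoPath:
                answers.append("No")
    return answers

Run on the example data from the question, this reproduces the listed answers:

G = nx.Graph()
G.add_weighted_edges_from([(0, 2, 4), (3, 5, 8), (3, 4, 1), (2, 3, 1), (0, 1, 1),
                           (4, 5, 1), (2, 4, 5), (1, 2, 1), (1, 3, 3)])
print(answer_queries(G, 0, 5, [(0, 1), (0, 2), (1, 3), (2, 3), (4, 5), (3, 4)]))
# [7, 5, 5, 6, 11, 8]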