Today I read, in Introduction to Algorithms, about the longest path problem, which asks for the longest simple path between two vertices in a weighted, directed graph. The author used a wonderful example showing that dynamic programming fails for the longest path problem because the problem does not have the optimal substructure that dynamic programming relies on. It was commented that this problem is actually NP-complete. So it must be really hard.
Now here is my question: instead of assigning every edge a positive weight k > 0, what if we simply assign each edge the negative weight -k? Then each "longest path" would automatically be a shortest path, and since by definition there are no loops in a shortest path, there should not be any loops in the corresponding longest path. Hence, using a quite common trick, we "can" turn the longest path problem into the shortest path problem.
Can someone point out the mistake in my reasoning? I figure something must be wrong but do not really know what it is. It is extremely unlikely that my argument is correct, for obvious reasons. Reading the pseudocode provided in the book, it seems the shortest path algorithm does not prohibit negative weights, and after all, everything can be "elevated" by adding a large enough constant to make all weights positive. I am a beginner in algorithms, so this question might be trivial to experts.
The mistake in your reasoning is this line:
since by definition there are no loops in a shortest path, there should not be any loops in the corresponding longest path.
If you have a graph where all edges have negative weight (which can happen in the transformation you're describing) and there's a negative cycle, then there is no shortest path between any two nodes on that cycle, since any path between them can have its cost reduced by following the cycle more and more times. Since there is no shortest path in that case, your reasoning breaks down.
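To make that concrete, here is a small sketch (Python; the bellman_ford helper and the example graph are made up purely for illustration) showing that negating positive weights turns any cycle into a negative cycle, so Bellman-Ford reports "negative cycle" instead of returning a shortest path:

    # Sketch: negating positive weights turns every cycle into a negative cycle,
    # so Bellman-Ford can only report the cycle instead of a shortest path.

    def bellman_ford(vertices, edges, source):
        """Return (distances, has_negative_cycle) for a weighted digraph."""
        dist = {v: float('inf') for v in vertices}
        dist[source] = 0
        for _ in range(len(vertices) - 1):
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
        # One more pass: any further improvement means a reachable negative cycle.
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                return dist, True
        return dist, False

    # Hypothetical graph containing the cycle a -> b -> c -> a, all weights positive.
    vertices = ['a', 'b', 'c', 'd']
    edges = [('a', 'b', 2), ('b', 'c', 3), ('c', 'a', 1), ('c', 'd', 4)]

    # Negate every weight, as in the proposed reduction.
    negated = [(u, v, -w) for u, v, w in edges]

    _, has_neg_cycle = bellman_ford(vertices, negated, 'a')
    print(has_neg_cycle)  # True: the cycle a->b->c->a now has weight -6, so no shortest path exists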
Now, you can argue that you should instead look for the shortest simple path between the nodes (that is, a path with no repeated vertices). Unfortunately, though, that problem is also NP-hard, so this reduction doesn't actually buy you anything.
Hope this helps!
since by definition there are no loops in a shortest path, there should not be any loops in the corresponding longest path
...that holds whenever the shortest path in the negated graph actually exists, i.e. when negating the weights does not create a negative cycle, and so that particular original problem is solvable in polynomial time. And if the particular original problem is a DAG, then this method solves it in linear time.
The complexity statement doesn't say that all graphs are that hard to solve, just that some are.
From your own link:
In contrast to the shortest path problem, which can be solved in polynomial time in graphs without negative-weight cycles, the longest path problem is NP-hard, meaning that it cannot be solved in polynomial time for arbitrary graphs unless P = NP.
[...]
For most graphs, this transformation is not useful because it creates cycles of negative length in −G. But if G is a directed acyclic graph, then no negative cycles can be created, and longest paths in G can be found in linear time by applying a linear time algorithm for shortest paths in −G, which is also a directed acyclic graph.
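To illustrate that last paragraph, here is a rough sketch (Python; the longest_path_dag helper and the example DAG are made up for illustration) of exactly that approach: negate the weights and run the linear-time DAG shortest-path algorithm, i.e. relax edges in topological order:

    from collections import defaultdict, deque

    def longest_path_dag(n, edges, source):
        """Longest-path distances from `source` in a DAG with vertices 0..n-1,
        computed as shortest paths over the negated weights in topological order."""
        neg = defaultdict(list)            # adjacency list of the negated graph -G
        indeg = [0] * n
        for u, v, w in edges:
            neg[u].append((v, -w))
            indeg[v] += 1

        # Kahn's algorithm for a topological order of the DAG.
        order, queue = [], deque(i for i in range(n) if indeg[i] == 0)
        while queue:
            u = queue.popleft()
            order.append(u)
            for v, _ in neg[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    queue.append(v)

        # Single-source shortest paths in -G: relax edges in topological order.
        dist = [float('inf')] * n
        dist[source] = 0
        for u in order:
            if dist[u] == float('inf'):
                continue
            for v, w in neg[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w

        # Negate back: longest-path distances (unreachable vertices stay at -inf).
        return [-d if d != float('inf') else float('-inf') for d in dist]

    # Hypothetical DAG: 0->1 (3), 0->2 (1), 2->1 (5), 1->3 (2).
    print(longest_path_dag(4, [(0, 1, 3), (0, 2, 1), (2, 1, 5), (1, 3, 2)], 0))
    # [0, 6, 1, 8]: the longest path to 1 is 0->2->1 (weight 6), to 3 it is 0->2->1->3 (weight 8)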
Related
If I am right, the single-source longest path problem for a general graph is NP-hard.
Can we negate all the edge weights of a DAG and run Dijkstra's or the Bellman-Ford algorithm to get the single-source longest path?
If yes, since Dijkstra's can't run on a graph with negative edge weights, how come it gives us a result when all edge weights are negative?
Yes, you are right. Your proposed solution can be used to find a longest path in a DAG.
However, notice that the longest path problem for a DAG is not NP-hard. It is NP-hard for the general case where the graph might not be a DAG. There is some interesting, relevant discussion on Wikipedia's article on the Longest path problem.
As for your second question, Dijkstra's algorithm is defined for non-negative edge weights. The implementation might give you a result anyway, but it is no longer guaranteed to be correct.
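To see how it can go wrong, here is a small sketch (Python; the dijkstra helper and the three-node example graph are hypothetical) where a textbook Dijkstra, run on the negated weights, terminates without complaint but reports the wrong longest path:

    import heapq

    def dijkstra(adj, source):
        """Textbook Dijkstra: assumes non-negative weights and never revisits a settled node."""
        dist = {source: 0}
        settled = set()
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if u in settled:
                continue
            settled.add(u)
            for v, w in adj.get(u, []):
                if v not in settled and d + w < dist.get(v, float('inf')):
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    # Hypothetical DAG with positive weights: A->B (5), A->C (1), C->B (10).
    # The longest path from A to B is A->C->B with weight 11.
    adj_negated = {'A': [('B', -5), ('C', -1)], 'C': [('B', -10)]}

    print(-dijkstra(adj_negated, 'A')['B'])
    # prints 5, not 11: B is settled via the direct edge before the better
    # (more negative) path through C is ever considered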
We all know that neither the Bellman-Ford nor the Floyd-Warshall algorithm can handle negative weight cycles; they can only detect them. Is there any graph algorithm that could still give a correct result in the presence of negative cycles?
The network simplex method may do what you want, though it's notably more complicated. https://www.cs.upc.edu/~erodri/webpage/cps/theory/lp/network/slides.pdf
Methods exist for eliminating negative cost cycles though I can't think of their names at the moment.
There is no algorithm that can find the shortest path in a graph that contains a (reachable) negative cycle, because there is no shortest path: you can loop around in the cycle an unlimited number of times, each time making your path shorter (more negative).
You could modify the problem to ask for the shortest simple (non-self-intersecting) path, one that repeats no vertices. This problem is solvable, but it is NP-hard, so all currently known solutions run in exponential time or worse. That is because any instance of the longest path problem, which is known to be NP-hard, can be transformed into an instance of this problem by negating all the edge weights.
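For completeness, here is a rough sketch (Python; the shortest_simple_path helper and the example graph are hypothetical) of the kind of exponential-time search that does work: it only ever extends simple paths, so negative cycles can never be followed:

    def shortest_simple_path(adj, source, target):
        """Exponential-time search for the shortest *simple* path (no repeated vertices).
        Works even when the graph has negative cycles, because cycles are never followed."""
        best = [float('inf')]

        def dfs(u, visited, cost):
            if u == target:
                best[0] = min(best[0], cost)
                return
            for v, w in adj.get(u, []):
                if v not in visited:            # enforce simplicity
                    dfs(v, visited | {v}, cost + w)

        dfs(source, {source}, 0)
        return best[0]

    # Hypothetical graph with a negative cycle b -> c -> b of total weight -2.
    adj = {
        'a': [('b', 1)],
        'b': [('c', -3), ('d', 2)],
        'c': [('b', 1), ('d', 1)],
        'd': [],
    }
    print(shortest_simple_path(adj, 'a', 'd'))
    # -1, via a->b->c->d; without the simplicity requirement the cost would be
    # unbounded, since a walk could repeat the negative cycle b->c->b forever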
I am trying to write a C++ program for a problem in which I have a number of points. I need to find a path that goes through all the points. This is not actually TSP, because as far as I know, in TSP it is possible to travel from every point to every other point. But in my case the path network between the points is fixed, and I just need to find a suitable path that goes through all of them, given that a point may not have a connection to every other point. So which algorithm am I supposed to follow?
It seems you are looking for a way to traverse a graph? If so, have you tried breadth-first search http://en.wikipedia.org/wiki/Breadth-first_search or depth-first search http://en.wikipedia.org/wiki/Depth-first_search to traverse your graph?
You want to find a Hamiltonian path for a graph.
In the mathematical field of graph theory, a Hamiltonian path (or traceable path) is a path in an undirected graph that visits each vertex exactly once. A Hamiltonian cycle (or Hamiltonian circuit) is a Hamiltonian path that is a cycle. Determining whether such paths and cycles exist in graphs is the Hamiltonian path problem, which is NP-complete.
Some techniques that exist:
There are n! different sequences of vertices that might be Hamiltonian paths in a given n-vertex graph (and are, in a complete graph), so a brute force search algorithm that tests all possible sequences would be very slow. There are several faster approaches. A search procedure by Frank Rubin divides the edges of the graph into three classes: those that must be in the path, those that cannot be in the path, and undecided. As the search proceeds, a set of decision rules classifies the undecided edges, and determines whether to halt or continue the search. The algorithm divides the graph into components that can be solved separately. Also, a dynamic programming algorithm of Bellman, Held, and Karp can be used to solve the problem in time O(n^2 * 2^n). In this method, one determines, for each set S of vertices and each vertex v in S, whether there is a path that covers exactly the vertices in S and ends at v. For each choice of S and v, a path exists for (S, v) if and only if v has a neighbor w such that a path exists for (S − v, w), which can be looked up from already-computed information in the dynamic program.

Andreas Björklund provided an alternative approach using the inclusion–exclusion principle to reduce the problem of counting the number of Hamiltonian cycles to a simpler counting problem, of counting cycle covers, which can be solved by computing certain matrix determinants. Using this method, he showed how to solve the Hamiltonian cycle problem in arbitrary n-vertex graphs by a Monte Carlo algorithm in time O(1.657^n); for bipartite graphs this algorithm can be further improved to time O(1.414^n).

For graphs of maximum degree three, a careful backtracking search can find a Hamiltonian cycle (if one exists) in time O(1.251^n).
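To make the Bellman-Held-Karp idea quoted above more tangible, here is a compact sketch (Python; exponential time and memory, so only practical for small graphs, and the example inputs are made up) of the O(n^2 * 2^n) subset DP that decides whether a Hamiltonian path exists:

    def has_hamiltonian_path(n, edges):
        """Bellman-Held-Karp subset DP: dp[S][v] is True iff some path visits exactly
        the vertex set S (a bitmask over 0..n-1) and ends at v. Runs in O(n^2 * 2^n)."""
        adj = [[False] * n for _ in range(n)]
        for u, v in edges:                   # undirected edges
            adj[u][v] = adj[v][u] = True

        dp = [[False] * n for _ in range(1 << n)]
        for v in range(n):
            dp[1 << v][v] = True             # a single vertex is a path ending at itself

        for S in range(1 << n):
            for v in range(n):
                if not (S >> v) & 1 or not dp[S][v]:
                    continue
                for w in range(n):           # try to extend the path ending at v by edge (v, w)
                    if adj[v][w] and not (S >> w) & 1:
                        dp[S | (1 << w)][w] = True

        full = (1 << n) - 1
        return any(dp[full][v] for v in range(n))

    # A 4-cycle 0-1-2-3 has a Hamiltonian path (for example 0,1,2,3)...
    print(has_hamiltonian_path(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))   # True
    # ...but a star with centre 0 does not: a path can pass through 0 only once.
    print(has_hamiltonian_path(4, [(0, 1), (0, 2), (0, 3)]))           # False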
Ok, I posted this question because of this exercise:
Can we modify Dijkstra’s algorithm to solve the single-source longest path problem by changing minimum to maximum? If so, then prove your algorithm correct. If not, then provide a counterexample.
For this exercise, and for all things related to Dijkstra's algorithm, I assume there are no negative weights in the graph. Otherwise it makes little sense, as Dijkstra's algorithm can't work properly even for the shortest path problem if a negative edge exists.
Ok, my intuition answered it for me:
Yes, I think it can be modified.
I just
initialise distance array to MININT
change distance[w] > distance[v]+weight to distance[w] < distance[v]+weight
Then I did some research to verify my answer. I found this post:
Longest path from a source to certain nodes in a DAG
At first I thought my answer was wrong because of the post above, but then I found that the answer in that post may itself be wrong: it mixed up the single-source longest path problem with the (general) longest path problem.
Also, the Wikipedia article on the Bellman–Ford algorithm correctly says:
The Bellman–Ford algorithm computes single-source shortest paths in a weighted digraph. For graphs with only non-negative edge weights, the faster Dijkstra's algorithm also solves the problem. Thus, Bellman–Ford is used primarily for graphs with negative edge weights.
So I think my answer is correct, right?
Dijkstra's algorithm can really solve the single-source longest path problem, and my modifications are correct, right?
No, we cannot(1) - or at the very least, no polynomial reduction/modification is known - the longest path problem is NP-hard, while Dijkstra's algorithm runs in polynomial time!
If we could find a modification of Dijkstra's algorithm that answers the longest path problem in polynomial time, we could derive P = NP.
If not, then provide a counterexample.
This is a very bad task. A counterexample can show that one specific modification is wrong, while there could be a different modification that is OK.
The truth is we do not know whether the longest path problem is solvable in polynomial time or not, but the general assumption is that it is not.
Regarding just changing the relaxation step:
        A
       / \
      1   2
     /     \
    B<--1---C

edges are (A,B), (A,C), (C,B) with weights 1, 2, 1
Dijkstra from A will first settle B, because it has the smallest tentative distance, and after that B can never be improved - it has already been removed from the set of unsettled vertices - so the longer path A->C->B (total weight 3) is never found.
At the very least, one would also have to change the min-heap into a max-heap, but then there is a different counterexample showing why it fails.
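Here is a small sketch (Python; the modified_dijkstra helper and the adjacency list are just for illustration) of the modification described in the question - distances initialised to minus infinity, the relaxation comparison flipped, but the priority queue still a min-heap - run on the counterexample above:

    import heapq

    def modified_dijkstra(adj, source):
        """Dijkstra with the relaxation flipped to 'greater than'; everything else is
        unchanged (distances start at -infinity, the priority queue is still a min-heap)."""
        dist = {v: float('-inf') for v in adj}
        dist[source] = 0
        settled = set()
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)       # still pops the *smallest* tentative distance
            if u in settled:
                continue
            settled.add(u)
            for v, w in adj[u]:
                if v not in settled and dist[u] + w > dist[v]:   # flipped comparison
                    dist[v] = dist[u] + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    # The counterexample graph: edges (A,B) weight 1, (A,C) weight 2, (C,B) weight 1.
    adj = {'A': [('B', 1), ('C', 2)], 'B': [], 'C': [('B', 1)]}

    print(modified_dijkstra(adj, 'A')['B'])
    # prints 1: B is settled via the direct edge (distance 1) before C is processed,
    # so the longer path A->C->B with total weight 3 is never applied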
(1) Probably; maybe if P = NP it is possible, but that is very unlikely.
Yes, we can. And your answer is almost correct. Except one thing.
You assume no negative weights. In that case Dijkstra's algorithm cannot find the longest path.
But if you assume only negative weights, you can easily find the longest path. To prove correctness, just take the absolute values of all weights and you get exactly the usual Dijkstra's algorithm with positive weights.
It works in directed acyclic graphs but not in cyclic graphs, since the path would have to revisit vertices and there is no way of avoiding that in Dijkstra's algorithm.
The only difference I could think of for this question is that in the Travelling Salesman Problem (TSP) I need to find a minimum-cost permutation of all the vertices in the graph, while in the shortest path problem there is no need to consider all the vertices; we can search the state space for minimum-length routes. Can anyone suggest more differences?
You've already called out the essential difference: the TSP is to find a path that contains a permutation of every node in the graph, while in the shortest path problem, any given shortest path may, and often does, contain a proper subset of the nodes in the graph.
Other differences include:
The TSP solution requires its answer to be a cycle.
The TSP solution will necessarily repeat a node in its path, while a shortest path will not (unless one is looking for the shortest path from a node to itself).
TSP is an NP-complete problem, while shortest path is solvable in polynomial time.
If you are looking for a precise statement of the difference, I would say you just need to replace your idea of the "permutation" with the more technical and precise term "simple cycle visiting every node in the graph", or better, "Hamilton cycle":
The TSP requires one to find the simple cycle covering every node in the graph with the smallest weight (alternatively, the Hamilton cycle with the least weight). The Shortest Path problem requires one to find the path between two given nodes with the smallest weight. Shortest paths need not be Hamiltonian, nor do they need to be cycles.
With the shortest path problem you consider paths between two nodes. With the TSP you consider paths through all nodes. This makes the latter much more difficult.
Consider two paths between nodes A and B, one through D and the other through C, and let the one through C be the longer path. In the shortest path problem this longer path can be discarded immediately. In the TSP it is perfectly possible that this path is part of the overall solution, because you have to visit C anyway, and visiting it later might be even more expensive.
Therefore you can't break the TSP down into similar but smaller sub-problems.
The shortest path problem just has a source and a target; we need to find the shortest path between them (for a single source the output forms a shortest-path tree), and it can be solved in polynomial time.
Finding shortest paths:
Graphs with non-negative edge weights: Dijkstra's algorithm, O(V^2) with a simple list implementation
Directed graphs with arbitrary weights but no negative cycles: Bellman–Ford algorithm, time complexity O(VE)
The Floyd–Warshall algorithm is used to find the shortest paths between all pairs of vertices
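As a rough illustration of the Floyd–Warshall entry above, here is a minimal sketch (Python; the floyd_warshall helper and the 4-vertex example are hypothetical), not a tuned implementation:

    def floyd_warshall(n, edges):
        """All-pairs shortest paths in O(V^3); handles negative edges as long as there
        is no negative cycle (a negative dist[i][i] afterwards would reveal one)."""
        INF = float('inf')
        dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
        for u, v, w in edges:                    # directed edges u -> v with weight w
            dist[u][v] = min(dist[u][v], w)
        for k in range(n):                       # allow k as an intermediate vertex
            for i in range(n):
                for j in range(n):
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        return dist

    # Hypothetical example: 4 vertices, one negative edge, no negative cycle.
    edges = [(0, 1, 4), (0, 2, 1), (2, 1, -2), (1, 3, 3), (2, 3, 6)]
    print(floyd_warshall(4, edges)[0])
    # [0, -1, 1, 2]: for example 0->2->1 costs -1 and 0->2->1->3 costs 2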
TSP (Travelling Salesman Problem) is not like that: we have to cover every node starting from the source and finally return to the source at minimum cost, so eventually there must be a cycle. TSP is an NP-complete problem.
Ref:
https://en.wikipedia.org/wiki/Shortest_path_problem
https://en.wikipedia.org/wiki/Travelling_salesman_problem
http://www.geeksforgeeks.org/travelling-salesman-problem-set-1/
http://www.geeksforgeeks.org/greedy-algorithms-set-6-dijkstras-shortest-path-algorithm/
https://www.hackerearth.com/practice/algorithms/graphs/shortest-path-algorithms/tutorial/
In TSP, you need to both visit all nodes and also return to your starting point. This complicates the problem immensely.