Can I use Dijkstra's algorithm for the longest path in a directed cyclic graph?

I have weights from 0 to 10, where 0 is the shortest and 10 the longest.
Can I transform each weight x to 10 - x and then use Dijkstra for the shortest path?

In general, finding the longest path is NP-hard.
In contrast to the shortest path problem, which can be solved in
polynomial time in graphs without negative-weight cycles, the longest
path problem is NP-hard, meaning that it cannot be solved in
polynomial time for arbitrary graphs unless P = NP.
https://en.wikipedia.org/wiki/Longest_path_problem

No. A cyclic graph can contain walks of unbounded length, and Dijkstra's marking of nodes as visited prevents it from re-entering vertices, so it will not let you discover long paths.
Generally, finding the longest path is a genuinely hard problem; it is NP-hard.
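As a concrete illustration of why the proposed 10 - x trick does not work (even before cycles come into play), here is a minimal sketch with a made-up three-node graph: the transformed cost of a k-edge path is 10*k minus its original weight, so minimising it rewards paths with few edges rather than paths with large total weight.

```python
# Sketch (hypothetical 3-node graph): mapping weight w -> 10 - w and then
# minimising does NOT maximise the original weight, because the transformed
# cost is 10*k - sum(w) for a k-edge path.

graph = {            # original weights in the 0..10 range (made up)
    'A': {'B': 6, 'C': 5},
    'B': {},
    'C': {'B': 5},
}

def simple_paths(g, src, dst, path=None):
    """Enumerate all simple paths from src to dst (fine for tiny graphs)."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in g[src]:
        if nxt not in path:
            yield from simple_paths(g, nxt, dst, path)

def weight(g, path, transform=lambda w: w):
    return sum(transform(g[u][v]) for u, v in zip(path, path[1:]))

paths = list(simple_paths(graph, 'A', 'B'))
longest = max(paths, key=lambda p: weight(graph, p))
cheapest_transformed = min(paths, key=lambda p: weight(graph, p, lambda w: 10 - w))

print(longest)               # ['A', 'C', 'B']  -> original weight 10
print(cheapest_transformed)  # ['A', 'B']       -> original weight 6, not the longest
```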

Related

Question about single source longest path in a DAG

If I am right, the problem of single-source longest path for a general graph is NP-hard.
Can we negate all the edge weights of a DAG and run Dijkstra's or the Bellman-Ford algorithm to get single-source longest paths?
If yes: Dijkstra's can't run on a graph with negative edge weights, so how come it gives us a result when all edge weights are negative?
Yes, you are right. Your proposed solution can be used to find a longest path in a DAG.
However, notice that the longest path problem for a DAG is not NP-hard. It is NP-hard for the general case where the graph might not be a DAG. There is some interesting, relevant discussion on Wikipedia's article on the Longest path problem.
As for your second question, Dijkstra's algorithm is defined for non-negative edge weights. The implementation might give you a result anyway, but it is no longer guaranteed to be correct.
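A minimal sketch of that approach, assuming a small made-up DAG: negate every edge weight, run Bellman-Ford (which tolerates negative edges) from the source, and negate the resulting distances to read off the longest-path lengths.

```python
# Sketch, assuming the input is a DAG with a hypothetical edge list: negate
# the weights, run Bellman-Ford from the source, then negate the distances
# again to obtain single-source longest-path lengths.

def bellman_ford(edges, num_vertices, source):
    """Standard Bellman-Ford: returns dist[] or raises on a negative cycle."""
    INF = float('inf')
    dist = [INF] * num_vertices
    dist[source] = 0
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle: the input was not a DAG")
    return dist

# hypothetical DAG: 0 -> 1 -> 3 and 0 -> 2 -> 3
dag_edges = [(0, 1, 2), (1, 3, 4), (0, 2, 1), (2, 3, 10)]
negated = [(u, v, -w) for u, v, w in dag_edges]

dist = bellman_ford(negated, num_vertices=4, source=0)
longest = [-d if d != float('inf') else None for d in dist]
print(longest)   # [0, 2, 1, 11] -- the longest path 0 -> 2 -> 3 has weight 11
```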

Minimum weight edge in all paths of a directed graph

Given a directed graph whose edges may have negative or positive weights, what is the algorithm to find the smallest-weight edge over all paths from vertex s to vertex d?
You are describing the single-source shortest path problem, which can be solved using Dijkstra's algorithm if the edges are only positive, or Bellman-Ford if the edges are allowed to be negative as well.
From Wikipedia, the most important algorithms for solving this problem are:
Dijkstra's algorithm solves the single-source shortest path problem.
Bellman–Ford algorithm solves the single-source problem if edge weights may be negative.
A* search algorithm solves for single pair shortest path using heuristics to try to speed up the search.
Floyd–Warshall algorithm solves all pairs shortest paths.
Johnson's algorithm solves all pairs shortest paths, and may be faster than Floyd–Warshall on sparse graphs.
Viterbi algorithm solves the shortest stochastic path problem with an additional probabilistic weight on each node.
I would find all nodes reachable from s, e.g. by depth-first search. Then find all nodes that can reach d (equivalently, all nodes reachable from d in the graph with its edge directions reversed). Now you want the smallest-weight edge that starts in the first set and ends in the second set, as sketched below.
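A sketch of that approach with a hypothetical weighted adjacency list: an edge (u, v) lies on some s -> d walk exactly when u is reachable from s and d is reachable from v, so we take the minimum weight over such edges.

```python
# Sketch of the reachability idea above (graph and weights are made up).

def reachable(adj, start):
    """Iterative DFS; returns the set of vertices reachable from start."""
    seen, stack = set(), [start]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(v for v, _ in adj.get(u, []))
    return seen

def min_edge_on_some_path(adj, s, d):
    # reverse the graph so "can reach d" becomes "reachable from d"
    radj = {}
    for u, nbrs in adj.items():
        for v, w in nbrs:
            radj.setdefault(v, []).append((u, w))
    from_s = reachable(adj, s)
    to_d = reachable(radj, d)
    candidates = [w for u, nbrs in adj.items() if u in from_s
                  for v, w in nbrs if v in to_d]
    return min(candidates) if candidates else None

graph = {'s': [('a', 4), ('b', -2)], 'a': [('d', 7)], 'b': [('d', 3)], 'd': []}
print(min_edge_on_some_path(graph, 's', 'd'))   # -2 (the edge s -> b)
```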

Why can't we turn the longest path problem into a shortest path problem?

Today I read, in Introduction to Algorithms, about the longest path problem, which asks: in a weighted, directed graph, what is the longest simple path between two given vertices? The author used a wonderful example showing that dynamic programming fails for the longest path problem because it lacks the optimal-substructure property that an efficient solution would need. It was noted that this problem is actually NP-complete, so it must be really hard.
Now here is my question: instead of assigning every edge a positive weight k > 0, what if we simply assign each edge the negative weight -k? Then each "longest path" would automatically become a shortest path, and if, by definition, there are no loops in the shortest path, there should not be any loops in the corresponding longest path either. Hence, using a quite common trick, we "can" turn the longest path problem into the shortest path problem.
Can someone point out the mistake in my reasoning? I figure something must be wrong but do not really know what it is; it is extremely unlikely my argument is correct, for obvious reasons. Reading the pseudocode provided in the book, it seems the shortest path algorithm does not prohibit negative weights, and after all everything can be "elevated" by adding a large enough constant to make the weights positive. I am a beginner in algorithms, so this question might be trivial to experts.
The mistake in your reasoning is this line:
if there is no loops in the shortest path by definition, there should not be any loops in the corresponding longest path.
If you have a graph where all edges have negative weight (which can happen in the transformation you're describing) and there's a negative cycle, then there is no shortest path between any two nodes on that cycle, since any path between them can have its cost reduced by following the cycle more and more times. Since there is no shortest path in that case, your reasoning breaks down.
Now, you could argue that you should instead look for the shortest simple path between the nodes (that is, a path with no repeated nodes). Unfortunately, though, that problem is also NP-hard, so this reduction doesn't actually buy you anything.
Hope this helps!
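As a tiny worked illustration of that failure mode (with made-up weights): take a two-node cycle A -> B -> A whose original weights are both 1, negate them, and watch the cost of an A -> B walk drop without bound as you take more laps.

```python
# Made-up example: cycle A -> B -> A with original weights 1 and 1.
# After negation each lap around the cycle adds -2, so the cost of an
# A -> B walk can be pushed below any bound: no shortest walk exists.

lap_cost = (-1) + (-1)                    # negated A->B plus negated B->A
for laps in range(4):
    walk_cost = laps * lap_cost + (-1)    # go around `laps` times, then A -> B
    print(laps, walk_cost)                # prints -1, -3, -5, -7
```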
if there is no loops in the shortest path by definition, there should
not be any loops in the corresponding longest path
...and when that premise actually holds, i.e. the negated graph has no negative cycles, that particular instance of the original problem is solvable in polynomial time. And if that instance is a DAG, then this method solves it in linear time.
The complexity statement doesn't say that all graphs are that hard to solve, just that some are.
From your own link:
In contrast to the shortest path problem, which can be solved in
polynomial time in graphs without negative-weight cycles, the longest
path problem is NP-hard, meaning that it cannot be solved in
polynomial time for arbitrary graphs unless P = NP.
[...]
For most graphs, this transformation is not useful because it creates
cycles of negative length in −G. But if G is a directed acyclic graph,
then no negative cycles can be created, and longest paths in G can be
found in linear time by applying a linear time algorithm for shortest
paths in −G, which is also a directed acyclic graph.
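A sketch of the linear-time DAG method described in the quote, using a hypothetical edge list: compute a topological order, then relax edges in that order while maximising (which is equivalent to the linear-time shortest-path pass on -G).

```python
# Sketch of linear-time longest paths in a DAG (hypothetical graph):
# topologically sort, then relax edges in that order, taking maxima.

from collections import deque

def dag_longest_paths(num_vertices, edges, source):
    adj = [[] for _ in range(num_vertices)]
    indeg = [0] * num_vertices
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1

    # Kahn's algorithm for a topological order
    order, queue = [], deque(i for i in range(num_vertices) if indeg[i] == 0)
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)

    NEG_INF = float('-inf')
    dist = [NEG_INF] * num_vertices
    dist[source] = 0
    for u in order:                      # relax in topological order, maximising
        if dist[u] == NEG_INF:
            continue
        for v, w in adj[u]:
            dist[v] = max(dist[v], dist[u] + w)
    return dist

edges = [(0, 1, 2), (1, 3, 4), (0, 2, 1), (2, 3, 10)]
print(dag_longest_paths(4, edges, source=0))   # [0, 2, 1, 11]
```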

Can I use Dijkstra algorithm for negative weighted graph?

I know the Bellman-Ford algorithm works well with negatively weighted graphs, but I've written an implementation of Dijkstra's algorithm that works very well. However, it fails when I insert negatively weighted edges. Any solution?
I think we can't do that: Dijkstra's algorithm will not find a final way to reach the destination vertex because it may get stuck in loops. It is not made for negatively weighted graphs; you should go for the Bellman-Ford algorithm instead.
Actually, it is possible in a special case. Dijkstra's algorithm will fail if there is an edge of negative length. However, if all edges are of negative length, then you may invert the signs of all edge lengths and use the algorithm to find the longest path between two vertices of the graph (that path will represent the shortest path in the original graph).
But in the general case, as you have stated, it is not possible to use Dijkstra. If there is no cycle of negative length, then you should use the Bellman-Ford algorithm. If you cannot guarantee that there is no cycle of negative length, then the problem (finding a shortest simple path) is NP-hard and there is no known polynomial algorithm.
As you stated, Bellman-Ford is the algorithm of choice for finding shortest paths in a graph with negative weights. The issue with using Dijkstra's algorithm in this scenario is that Dijkstra's relies on the assumption that extending a path with another edge can never make it cheaper, so any prefix of a shortest path is itself optimal; this is not necessarily true once negative edge weights are added.
If all edge weights are positive, adding more edges to a path is guaranteed to make it longer. Knowing this, Dijkstra's algorithm discards any path that is longer than the shortest one it has found so far to some vertex, since there is no chance of the long path later becoming shorter than the short path.
However, this assumption does not hold if there are negative edge weights: we could have an extremely long path P that we discarded, but somewhere down the road P passes through a very negative edge and becomes shorter than the shortest path we currently have. Therefore we can't guarantee that a path we've found is the shortest at any stage of Dijkstra's algorithm, which is exactly what the algorithm relies on.
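A minimal sketch of that failure with a made-up three-node graph and a textbook Dijkstra (each vertex settled once): the algorithm settles t at distance 1 and never reconsiders it, even though s -> u -> t costs 2 + (-5) = -3.

```python
# Sketch: classic Dijkstra on a hypothetical graph with one negative edge.

import heapq

def dijkstra(adj, source):
    """Classic Dijkstra: assumes non-negative weights, settles each node once."""
    dist = {source: 0}
    settled = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)                      # u's distance is now treated as final
        for v, w in adj.get(u, []):
            if v not in settled and d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

graph = {'s': [('t', 1), ('u', 2)], 'u': [('t', -5)], 't': []}
print(dijkstra(graph, 's'))   # {'s': 0, 't': 1, 'u': 2} -- but s->u->t costs -3
```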

What is the difference between Travelling Salesman and finding Shortest Path?

The only difference I could think of is that in the Travelling Salesman Problem (TSP) I need to find a
minimum-weight permutation of all the vertices in the graph, whereas in the shortest path problem there is no need to consider all the vertices; we can search the state space for minimum-length routes between the two endpoints. Can anyone suggest more differences?
You've already called out the essential difference: the TSP is to find a path that contains a permutation of every node in the graph, while in the shortest path problem, any given shortest path may, and often does, contain a proper subset of the nodes in the graph.
Other differences include:
The TSP solution requires its answer to be a cycle.
The TSP solution will necessarily repeat a node in its path, while a shortest path will not (unless one is looking for shortest path from a node to itself).
TSP is an NP-complete problem, while the shortest path problem is solvable in polynomial time.
If you are looking for a precise statement of the difference, I would say you just need to replace your idea of the "permutation" with the more technical and precise term "simple cycle visiting every node in the graph", or better, "Hamiltonian cycle":
The TSP requires one to find the simple cycle covering every node in the graph with the smallest weight (alternatively, the Hamiltonian cycle with the least weight). The shortest path problem requires one to find the path between two given nodes with the smallest weight. Shortest paths need not be Hamiltonian, nor do they need to be cycles.
With the shortest path problem you consider paths between two nodes. With the TSP you consider paths through all nodes. This makes the latter much more difficult.
Consider two paths between nodes A and B, one through D and the other through C, and let the one through C be the longer path. In the shortest path problem this longer path can be discarded immediately. In the TSP it is perfectly possible that this path is part of the overall solution, because you have to visit C anyway, and visiting it later might be even more expensive.
Therefore you can't break the TSP down into similar but smaller sub-problems the way you can with shortest paths.
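A brute-force TSP sketch on a small made-up complete graph: because a "worse" partial route may still lead to the best complete tour, partial solutions cannot be discarded the way Dijkstra discards them, so this sketch simply tries every permutation of the remaining cities (O(n!)).

```python
# Brute-force TSP on hypothetical symmetric distances.

from itertools import permutations

dist = {
    ('A', 'B'): 10, ('A', 'C'): 15, ('A', 'D'): 20,
    ('B', 'C'): 35, ('B', 'D'): 25, ('C', 'D'): 30,
}
def d(u, v):
    return dist.get((u, v)) or dist[(v, u)]

def brute_force_tsp(cities, start='A'):
    rest = [c for c in cities if c != start]
    best = None
    for perm in permutations(rest):
        tour = (start,) + perm + (start,)                 # must return home
        cost = sum(d(u, v) for u, v in zip(tour, tour[1:]))
        if best is None or cost < best[0]:
            best = (cost, tour)
    return best

print(brute_force_tsp(['A', 'B', 'C', 'D']))   # (80, ('A', 'B', 'D', 'C', 'A'))
```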
The shortest path problem just has a source and a target; we need to find the shortest path between them, and (for a single source) the output forms a shortest-path tree, computable in polynomial time.
Finding shortest paths:
Graphs with non-negative edge weights:
Dijkstra's algorithm, O(V^2) with a simple list (faster with a priority queue).
Directed graphs with arbitrary weights but no negative cycles:
Bellman-Ford algorithm, time complexity O(VE).
The Floyd-Warshall algorithm is used to find the shortest paths between all pairs of vertices.
TSP (Travelling Salesman Problem) is not like that: we have to cover every node, starting from the source and finally returning to the source at minimum cost, so the answer is necessarily a cycle. TSP is an NP-complete problem.
Ref:
https://en.wikipedia.org/wiki/Shortest_path_problem
https://en.wikipedia.org/wiki/Travelling_salesman_problem
http://www.geeksforgeeks.org/travelling-salesman-problem-set-1/
http://www.geeksforgeeks.org/greedy-algorithms-set-6-dijkstras-shortest-path-algorithm/
https://www.hackerearth.com/practice/algorithms/graphs/shortest-path-algorithms/tutorial/
In TSP, you need to both visit all nodes and also return to your starting point. This complicates the problem immensely.
