Ok, I posted this question because of this exercise:
Can we modify Dijkstra’s algorithm to solve the single-source longest path problem by changing minimum to maximum? If so, then prove your algorithm correct. If not, then provide a counterexample.
For this exercise, and for anything related to Dijkstra's algorithm, I assume there are no negative weights in the graph. Otherwise it makes little sense, since Dijkstra cannot work properly even for the shortest path problem if negative edges exist.
Ok, my intuition answered it for me:
Yes, I think it can be modified.
I just
initialise distance array to MININT
change distance[w] > distance[v]+weight to distance[w] < distance[v]+weight
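In code, the modification I have in mind looks roughly like this (just a sketch; the adjacency-list representation and the names are only for illustration):

```python
import heapq

def modified_dijkstra(graph, source):
    """Dijkstra with 'minimum' changed to 'maximum': distances start at
    -infinity and the relaxation comparison is flipped.
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    dist = {v: float('-inf') for v in graph}     # was float('inf')
    dist[source] = 0
    heap = [(0, source)]                         # min-heap, as in the original
    settled = set()
    while heap:
        _, v = heapq.heappop(heap)
        if v in settled:
            continue
        settled.add(v)                           # extraction finalises v, as in standard Dijkstra
        for w, weight in graph[v]:
            if w not in settled and dist[w] < dist[v] + weight:   # was '>'
                dist[w] = dist[v] + weight
                heapq.heappush(heap, (dist[w], w))
    return dist
```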
Then I did some research to verify my answer. I found this post:
Longest path between from a source to certain nodes in a DAG
First I thought my answer was wrong because of the post above, but then I realised the answer in that post may itself be wrong: it mixes up the single-source longest path problem with the (general) longest path problem.
Also, the Wikipedia article on the Bellman–Ford algorithm says, correctly:
The Bellman–Ford algorithm computes single-source shortest paths in a weighted digraph. For graphs with only non-negative edge weights, the faster Dijkstra's algorithm also solves the problem. Thus, Bellman–Ford is used primarily for graphs with negative edge weights.
So I think my answer is correct, right?
Dijkstra really can solve the single-source longest path problem, and my modifications are correct, right?
No, we cannot(1) - or at the very least, no polynomial reduction/modification is known - the longest path problem is NP-hard, while Dijkstra's algorithm runs in polynomial time!
If we could find a modification of Dijkstra that answers the longest path problem in polynomial time, we could derive P = NP.
If not, then provide a counterexample.
This is a badly stated task. A counterexample can only show that one specific modification is wrong, while a different modification could still be OK.
The truth is that we do not know whether the longest path problem is solvable in polynomial time, but the general assumption is that it is not.
Regarding just changing the relaxation step:

    A
   / \
  1   2
 /     \
B<--1---C
The edges are (A,B) with weight 1, (A,C) with weight 2, and (C,B) with weight 1.
Dijkstra from A will settle B first (with tentative value 1, before C), and after that B's distance is never updated again, because B has already left the set of tentative vertices - so the algorithm reports 1 for B even though the longest path A -> C -> B has weight 3.
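To make the failure concrete, here is what a sketch like the one in the question does on this graph (assuming the relaxation-only modification, with the usual min-heap and the usual rule that an extracted vertex is final):

```python
graph = {'A': [('B', 1), ('C', 2)], 'B': [], 'C': [('B', 1)]}
# A is extracted first; then B is extracted with tentative value 1 and finalised.
# When C is processed afterwards, the edge C->B can no longer improve B.
print(modified_dijkstra(graph, 'A'))   # {'A': 0, 'B': 1, 'C': 2}
# But the longest path from A to B is A -> C -> B with total weight 3.
```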
At the very least, one would also have to change the min-heap into a max-heap, but then a different counterexample shows why that fails as well.
(1) Probably; if P = NP it might be possible, but that is considered very unlikely.
Yes, we can, and your answer is almost correct - except for one thing.
You assume no negative weights. In that case Dijkstra's algorithm cannot find the longest path.
But if you assume only negative weights, you can easily find the longest path. To prove correctness, just take the absolute values of all weights and you get exactly the usual Dijkstra's algorithm with positive weights.
It works in directed acyclic graphs but not in cyclic graphs, since the path would have to backtrack and there is no way of avoiding that in Dijkstra's algorithm.
If I am right, the problem of single source longest path for any general graph is NP-hard.
Can we negate all the edge weights of a DAG and run Dijkstra's or the Bellman-Ford algorithm to get the single-source longest path?
If yes: Dijkstra's can't run on a graph with negative edge weights, so how come it gives us a result with all negative edge weights?
Yes, you are right. Your proposed solution can be used to find a longest path in a DAG.
However, notice that the longest path problem for a DAG is not NP-hard. It is NP-hard for the general case where the graph might not be a DAG. There is some interesting, relevant discussion on Wikipedia's article on the Longest path problem.
As for your second question, Dijkstra's algorithm is defined for non-negative edge weights. The implementation might give you a result anyway, but it is no longer guaranteed to be correct.
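To make the DAG case concrete, here is a minimal sketch of the negate-and-run-Bellman-Ford idea (the edge-list representation and the names are assumptions for illustration):

```python
def dag_longest_paths(nodes, edges, source):
    """Single-source longest paths in a DAG: negate every weight, run
    Bellman-Ford, then negate the results back.
    edges: list of (u, v, w) triples, with w the original (positive) weight."""
    INF = float('inf')
    dist = {v: INF for v in nodes}
    dist[source] = 0
    # Standard Bellman-Ford on the negated graph: |V| - 1 rounds of relaxation.
    for _ in range(len(nodes) - 1):
        for u, v, w in edges:
            if dist[u] - w < dist[v]:            # the negated edge weight is -w
                dist[v] = dist[u] - w
    # Because the graph is acyclic, the negated graph has no negative cycles,
    # so the usual extra detection pass is unnecessary.
    return {v: -d if d != INF else -INF for v, d in dist.items()}
```

Running plain Dijkstra on the negated weights would violate its non-negativity assumption, which is exactly why it can return a wrong answer even though the code still terminates.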
I am aware of the fact that I have to apply Dijkstra's algorithm to get an answer. The entire algorithm is explained in depth in one of the answers.
However, why do we need to apply Dijkstra's algorithm to this problem? To my knowledge, Dijkstra finds the shortest-distance path.
But the problem setter has clearly asked for the minimum cost path. Considering this, shouldn't we apply Prim's algorithm to the question and find the MST of the entire chess board?
Here is the link to the problem.
Dijkstra's algorithm is indeed for finding the shortest distance path. However, note that "distance" does not have to mean distance measured in the normal way (i.e. with a ruler).
In fact Dijkstra's algorithm will also work for finding the shortest cost path in any network (providing that all the costs are greater than or equal to zero). All you need to do is to define the distance between any two nodes to be equal to the cost of the corresponding edge.
So, in this problem, when they search for the shortest path, they are defining distance in terms of the cost function defined in the problem.
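To make that concrete, here is a minimal sketch (the adjacency-list representation and the names are assumptions, not taken from the linked problem); the only thing that differs from the textbook shortest-distance version is what the edge weight means:

```python
import heapq

def cheapest_cost(adj, source, target):
    """Textbook Dijkstra, except the 'length' of an edge is whatever
    non-negative cost the problem defines.
    adj: dict mapping node -> list of (neighbor, cost) pairs."""
    best = {source: 0}
    heap = [(0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == target:
            return d
        for v, cost in adj[u]:
            nd = d + cost
            if nd < best.get(v, float('inf')):
                best[v] = nd
                heapq.heappush(heap, (nd, v))
    return float('inf')    # target unreachable
```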
Today I read in Introduction to Algorithms about the longest path problem, which asks, in a weighted directed graph, for the longest simple path between two vertices. The author uses a wonderful example to show that dynamic programming fails for the longest path problem, because the problem does not have the nice optimal substructure that the shortest path problem has. It is noted that this problem is actually NP-complete, so it must be really hard.
Now here is my question: instead of assigning every edge a positive weight k > 0, what if we simply assign each edge the negative weight -k? Then each "longest path" would automatically be a shortest path, and if there are no loops in the shortest path by definition, there should not be any loops in the corresponding longest path. Hence, using a quite common trick, we "can" turn the longest path problem into the shortest path problem.
Can someone point out my mistake in the reasoning? I figure something must be wrong but do not really know what it is. It is extremely unlikely my argument is correct, for obvious reasons. Reading the pseudo code provided in the book, it seems the algorithm for shortest path does not prohibit using negative weights, and after all everything can be "elevated" by adding a large enough constant to make them positive. I am a beginner in algorithms so this question might be trivial to experts.
The mistake in your reasoning is this line:
if there are no loops in the shortest path by definition, there should not be any loops in the corresponding longest path.
If you have a graph where all edges have negative weight (which can happen in the transformation you're describing) and there's a negative cycle, then there is no shortest path between any two nodes on that cycle, since any path between them can have its cost reduced by following the cycle more and more times. Since there is no shortest path in that case, your reasoning breaks down.
Now, you can argue that you should instead look for the shortest simple path between the nodes (that is, a path with no repeated vertices). Unfortunately, though, that problem is also NP-hard, so this reduction doesn't actually buy you anything.
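As a tiny illustration of why the reduction collapses (the graph below is invented for the example): negate a 3-cycle of positive weights, and Bellman-Ford's extra pass immediately reports a negative cycle, i.e. there is no shortest path to return.

```python
def has_negative_cycle(nodes, edges, source):
    """Plain Bellman-Ford plus one extra pass for negative-cycle detection.
    edges: list of (u, v, w) triples."""
    dist = {v: float('inf') for v in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Any further improvement means some cycle has negative total weight.
    return any(dist[u] + w < dist[v] for u, v, w in edges)

# A triangle whose edges all had weight 1, after negation:
edges = [('A', 'B', -1), ('B', 'C', -1), ('C', 'A', -1)]
print(has_negative_cycle(['A', 'B', 'C'], edges, 'A'))   # True
```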
Hope this helps!
if there are no loops in the shortest path by definition, there should not be any loops in the corresponding longest path
...and so that particular original problem is solvable in polynomial time. And if the particular original problem is a DAG then this method solves it in linear time.
The complexity statement doesn't say that all graphs are that hard to solve, just that some are.
From your own link:
In contrast to the shortest path problem, which can be solved in polynomial time in graphs without negative-weight cycles, the longest path problem is NP-hard, meaning that it cannot be solved in polynomial time for arbitrary graphs unless P = NP.
[...]
For most graphs, this transformation is not useful because it creates cycles of negative length in −G. But if G is a directed acyclic graph, then no negative cycles can be created, and longest paths in G can be found in linear time by applying a linear time algorithm for shortest paths in −G, which is also a directed acyclic graph.
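A minimal sketch of that linear-time approach, written directly as maximum-relaxation in topological order (which is the same thing as shortest paths in −G; the 0-based node numbering and edge-list format are assumptions for illustration):

```python
from collections import defaultdict, deque

def longest_paths_in_dag(n, edges, source):
    """Single-source longest paths in a DAG in O(V + E):
    relax edges in topological order, taking the maximum."""
    adj = defaultdict(list)
    indegree = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indegree[v] += 1
    # Kahn's algorithm for a topological order.
    order, queue = [], deque(v for v in range(n) if indegree[v] == 0)
    while queue:
        u = queue.popleft()
        order.append(u)
        for v, _ in adj[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    dist = [float('-inf')] * n
    dist[source] = 0
    for u in order:
        if dist[u] == float('-inf'):     # u is unreachable from the source
            continue
        for v, w in adj[u]:
            dist[v] = max(dist[v], dist[u] + w)
    return dist
```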
During a course at university on graph theory, we were talking about finding shortest paths, so Dijkstra's algorithm came up; I should mention that the edges of the graph were weighted, with weights > 0. Then the professor asked how we could find the shortest path if the edges weren't weighted. I thought the same algorithm would do, since the edges have the "same" non-negative weight, but he suggested BFS. Is this true? Wouldn't Dijkstra work correctly? I'm not questioning BFS finding the path, but since it is exhaustive I thought maybe it would be better to avoid it.
Dijkstra worked fine for me even with unweighted graphs. Every connection just has weight 1.
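That does work: with every edge weight equal to 1, Dijkstra settles vertices level by level just like BFS, so the distances agree. BFS is simply the cheaper way to get them, O(V + E) instead of O((V + E) log V) for Dijkstra with a binary heap. A minimal sketch (adjacency-list dict assumed):

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest paths: BFS visits vertices in order of increasing
    distance, so the first time a vertex is reached, its distance is final.
    adj: dict mapping node -> iterable of neighbors."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:            # first visit == shortest distance
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```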
I have a random undirected social graph.
I want to find a Hamiltonian path if possible, or if not possible (or not possible to determine in polynomial time whether it is possible), a series of paths. In this "series of paths" (where all N nodes are used exactly once), I want to minimize the number of paths and maximize the average length of the paths (so no trivial solution of N paths of a single node).
I have generated an adjacency matrix for the nodes and edges already.
Any suggestions? Pointers in the right direction? I realize this will require heuristics because of the NP-complete (?) nature of the problem, and I am OK with a "good enough" answer. Also I would like to do this in Java.
Thanks!
If I'm interpreting your question correctly, what you're asking for is still NP-hard, since the best solution to the "multiple paths" problem would be a Hamiltonian path, and determining whether one exists is known to be NP-hard. Moreover, even if you're guaranteed that a Hamiltonian path doesn't exist, solving this problem could still be NP-hard, since I could give you a graph with a single disconnected node floating in space, for which the best solution is a trivial path containing that node and a Hamiltonian path in the remaining graph. As a result, unless P = NP, there isn't going to be a polynomial-time algorithm for your problem.
Hope this helps, and sorry for the negative result!
Angluin and Valiant gave a near linear-time heuristic that works almost always in a sufficiently dense Erdos-Renyi random graph. It's described by Wilf, on page 121. Probably your random graph is not Erdos-Renyi, but the heuristic might work anyway (when it "fails", it still gives you a (hopefully) long path; greedily take this path and run A-V again).
Use a genetic algorithm (without crossover), where each individual is a permutation of the nodes. This gives you "series of paths" at each generation, evolving to a minimal number of paths (1) and a maximal avg. length (N).
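One way to make that representation concrete (the decoding scheme and the names below are my own illustration in Python, not part of the answer above, but they translate directly to Java): decode each permutation into a series of paths by starting a new path whenever the next node is not adjacent to the current one, and score individuals by the number of paths produced.

```python
import random

def paths_from_permutation(perm, adj):
    """Split a permutation of all nodes into maximal runs of adjacent nodes.
    adj[u] is the set of neighbours of u in the social graph."""
    paths = [[perm[0]]]
    for u, v in zip(perm, perm[1:]):
        if v in adj[u]:
            paths[-1].append(v)          # extend the current path
        else:
            paths.append([v])            # start a new path
    return paths

def fitness(perm, adj):
    """Fewer paths is better; a Hamiltonian path scores 1."""
    return len(paths_from_permutation(perm, adj))

def mutate(perm):
    """Mutation without crossover: swap two positions in the permutation."""
    i, j = random.sample(range(len(perm)), 2)
    perm = list(perm)
    perm[i], perm[j] = perm[j], perm[i]
    return perm
```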
As you have realized, there is no known exact solution in polynomial time. You can try some random search methods, though. My recommendation: start with a genetic algorithm and try out tabu search.