Maximum cost and path algorithms for weighted directed graphs

What algorithms are there, for weighted directed graphs, to find the maximum cost and the corresponding path for going from a vertex A to a vertex K?
I was thinking of a modified Dijkstra, but while studying this algorithm I found out it can't be used with negative weights and can't be used to find the maximum cost.

I suggest the following: choose any algorithm that finds the minimum cost (distance) and also works with negative edges (so Dijkstra cannot be used for this). Then run this algorithm using the negation of the cost of each edge. You can use the Bellman–Ford algorithm, for instance.
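A minimal C++ sketch (mine, not from the answer) of this negation trick on a small made-up graph; it assumes the original graph has no positive-weight cycle on a route from A to K, otherwise the maximum cost is unbounded and the extra Bellman–Ford round reports it as a negative cycle in the negated graph:
#include <cstdio>
#include <vector>
#include <limits>

struct Edge { int from, to; double cost; };

int main() {
    const int n = 4;                                   // hypothetical 4-vertex example
    std::vector<Edge> edges = {
        {0, 1, 2.0}, {1, 3, 3.0}, {0, 2, 4.0}, {2, 3, 2.5}
    };
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> dist(n, INF);
    dist[0] = 0.0;                                     // vertex 0 plays the role of A

    // Bellman-Ford on the negated weights: n-1 rounds of edge relaxation.
    for (int round = 0; round + 1 < n; ++round)
        for (const Edge& e : edges)
            if (dist[e.from] != INF && dist[e.from] - e.cost < dist[e.to])
                dist[e.to] = dist[e.from] - e.cost;    // -e.cost is the negated weight

    // One extra round: if anything still improves, the negated graph has a
    // negative cycle, i.e. the original graph has a positive cycle reachable
    // from A; if that cycle can also reach K, the maximum cost is unbounded.
    bool improvable = false;
    for (const Edge& e : edges)
        if (dist[e.from] != INF && dist[e.from] - e.cost < dist[e.to])
            improvable = true;

    if (improvable)
        std::printf("positive cycle detected: maximum cost may be unbounded\n");
    else
        std::printf("maximum cost 0 -> 3: %g\n", -dist[3]);   // negate the result back
}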

You could use a modified version of the A* (A-star) algorithm. I say modified, but it wouldn't actually be modified; you just need an appropriate heuristic. It is a path-finding algorithm, and you would just need to set your heuristic to choose the costliest path.
A* works by starting at some vertex V and adding all of that vertex's neighbours to an open list. Then it moves, in your case, to the node with the highest cost. The previously visited node is moved to a closed list. Then all the nodes adjacent to the current node are added to the open list, and so on.
It will find your K, and if your heuristic is to always choose the costliest path, you will have the maximum cost route.
A* has been used, for example, to play Infinite Mario.

Related

Does Dijkstra's algorithm always return the "shortest" (least number of edges) path?

There are two functions that I wish to minimize:
a. the number of "obstacles" on the path (assume each obstacle increases the cost); and
b. total number of edges between the source and the destination.
If I had to minimize just (a), I would have used Dijkstra's algorithm; if I had to minimize just (b), I would have used BFS.
But given that I have to minimize both, can I use Dijkstra's algorithm only? In other words, if I find the path with the least cost from the obstacles, does Dijkstra's algorithm also guarantee that the path length thus obtained (between source and destination) would be the shortest?
When discussing paths on weighted graphs, the term "shortest path" means the path with the lowest total cost. Think of the weights as distances. This is the path that Dijkstra's algorithm will find.
You can use any cost function you like, as long as the cost to get from one vertex to another is always positive or zero. As mentioned in comments, however, you can only minimize one function at a time. This is a general fact that has nothing to do with Dijkstra's algorithm.
The cost function that you seem to be suggesting is perfectly fine -- the cost to move to a normal vertex is 1, while the cost to move to an "obstacle" vertex is higher. Dijkstra's algorithm is the appropriate way to find a path with the lowest total cost.
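A minimal sketch (my own, with made-up weights) of that single combined cost function: stepping onto a normal vertex costs 1, stepping onto an obstacle vertex costs a larger penalty (10 here, an arbitrary choice), and ordinary Dijkstra minimizes the one resulting total:
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>
#include <limits>

int main() {
    const int n = 5;
    std::vector<bool> obstacle = {false, true, false, false, false};
    std::vector<std::vector<int>> adj = {              // unweighted adjacency lists
        {1, 2}, {3}, {3}, {4}, {}
    };
    auto step_cost = [&](int v) { return obstacle[v] ? 10 : 1; };

    const int INF = std::numeric_limits<int>::max();
    std::vector<int> dist(n, INF);
    using State = std::pair<int, int>;                 // (distance, vertex)
    std::priority_queue<State, std::vector<State>, std::greater<State>> pq;
    dist[0] = 0;
    pq.push({0, 0});

    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d != dist[u]) continue;                    // stale queue entry
        for (int v : adj[u]) {
            int nd = d + step_cost(v);                 // entering v pays its penalty
            if (nd < dist[v]) { dist[v] = nd; pq.push({nd, v}); }
        }
    }
    std::printf("cost 0 -> 4: %d\n", dist[4]);         // prints 3, avoiding the obstacle at vertex 1
}
Because obstacles are folded into the edge costs, there is still only one function being minimized, which is the point made above.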

Modified Dijkstra's Algorithm

We are given a directed graph with edge weights W lying between 0 and 1. The cost of a path from the source to the target node is the product of the weights of the edges lying on that path. I wanted to know of an algorithm which can find the minimum cost path in polynomial time, or any other heuristic.
I thought along the lines of taking the logarithms of the edge weights (and then their absolute values) and applying Dijkstra to this graph, but I think there will be precision problems.
Is there a better way, or can I improve upon the log approach?
In Dijkstra's algorithm, when you visit a node you know that there is no shorter path to that node. This is no longer true if you multiply edge weights that lie between 0 and 1: visiting more vertices only makes the product smaller.
Basically this is equivalent to finding the longest path in a graph. This can also be seen by using your idea of taking logarithms, as the logarithm of a number between 0 and 1 is negative. If you take the absolute values of the logarithms of the weights, the longest path corresponds to the shortest path in the multiplicative graph.
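To make the equivalence concrete (a restatement of the point above, not part of the original answer): for a path with edge weights w_1, ..., w_k, each in (0, 1], minimizing the product w_1 · w_2 · ... · w_k is the same as minimizing log(w_1) + ... + log(w_k). Each log(w_i) is at most 0, so this sum equals −(|log w_1| + ... + |log w_k|), and minimizing it means maximizing the sum of the non-negative weights |log w_i|, which is exactly the longest-path problem on the log-transformed graph.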
If your graph is acyclic there is a straightforward algorithm (modified from Longest path problem).
Find a Topological ordering of the DAG.
For each vertex you need to store the cost of the best path found so far. Initialize the start vertex's cost to one (the empty product) and mark every other vertex as not yet reached.
Travel through the DAG in topological order starting from your start vertex. At each reached vertex, check all of its children; if going through this vertex gives a child a smaller cost (the current vertex's cost times the edge weight), update the child. Also store the predecessor through which each vertex is reached with the lowest cost.
After you reach your final vertex, you can recover the "shortest" path by travelling back from the end vertex using the stored predecessors.
Of course, if your graph is not acyclic you can generally drive the cost arbitrarily close to zero by repeating a loop whose weights are below one.
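A minimal C++ sketch (mine, not from the answer) of these steps, working directly with products instead of logarithms; the small DAG, its weights, and the topological order are made up for illustration:
#include <cstdio>
#include <vector>

struct Edge { int to; double w; };                   // 0 < w <= 1

int main() {
    const int n = 4;                                 // hypothetical small DAG
    std::vector<std::vector<Edge>> adj(n);
    adj[0] = {{1, 0.5}, {2, 0.9}};
    adj[1] = {{3, 0.9}};
    adj[2] = {{3, 0.4}};
    std::vector<int> topo = {0, 1, 2, 3};            // a valid topological order, assumed given

    std::vector<double> cost(n, 2.0);                // 2.0 acts as "not reached" (larger than any product)
    std::vector<int> pred(n, -1);
    cost[0] = 1.0;                                   // start vertex, empty product

    // Relax edges in topological order: each vertex is final by the time it is processed.
    for (int u : topo)
        if (cost[u] <= 1.0)                          // only expand vertices reached from the start
            for (const Edge& e : adj[u])
                if (cost[u] * e.w < cost[e.to]) {
                    cost[e.to] = cost[u] * e.w;
                    pred[e.to] = u;                  // remember how we got here
                }

    // Recover the path to vertex 3 by walking the predecessors backwards.
    std::printf("minimum product to vertex 3: %g, path (reversed):", cost[3]);
    for (int v = 3; v != -1; v = pred[v]) std::printf(" %d", v);
    std::printf("\n");
}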

What is difference between BFS and Dijkstra's algorithms when looking for shortest path?

I was reading about Graph algorithms and I came across these two algorithms:
Dijkstra's algorithm
Breadth-first search
What is the difference between Dijkstra's algorithm and BFS while looking for the shortest-path between nodes?
I searched a lot about this but didn't get any satisfactory answer!
The rules for BFS for finding shortest-path in a graph are:
We discover all the connected vertices,
Add them to the queue, and also
Store the distance (weight/length) from source u to that vertex v.
Update the stored distance whenever we find a shorter path from source u to that vertex v, and we have it!
This is exactly the same thing we do in Dijkstra's algorithm!
So why are the time complexities of these algorithms so different?
If anyone can explain it with the help of pseudocode, I will be very grateful!
I know I am missing something! Please help!
Breadth-first search is just Dijkstra's algorithm with all edge weights equal to 1.
Dijkstra's algorithm is conceptually breadth-first search that respects edge costs.
The process for exploring the graph is structurally the same in both cases.
When using BFS for finding the shortest path in a graph, we discover all the connected vertices, add them to the queue and also maintain the distance from the source to each vertex. Now, if we find a path from the source to that vertex with a smaller distance, we update it!
We do not maintain a distance in BFS; it is for discovery of nodes. So we put them in a plain queue and pop them. Unlike in Dijkstra, where we put the accumulated weight of a node (after relaxation) into a priority queue and pop the minimum distance.
So BFS would work like Dijkstra on an equal-weight graph. The complexity varies because of the use of a simple queue versus a priority queue.
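A minimal sketch (mine, on a made-up unweighted graph) showing why the plain queue suffices here: with unit weights, the FIFO order already pops vertices in non-decreasing distance, which is exactly what Dijkstra's priority queue would do:
#include <cstdio>
#include <queue>
#include <vector>

int main() {
    const int n = 5;
    std::vector<std::vector<int>> adj = {{1, 2}, {3}, {3}, {4}, {}};
    std::vector<int> dist(n, -1);        // -1 marks "not discovered yet"
    std::queue<int> q;
    dist[0] = 0;
    q.push(0);

    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : adj[u])
            if (dist[v] == -1) {         // first discovery is already optimal with unit weights
                dist[v] = dist[u] + 1;
                q.push(v);
            }
    }
    std::printf("unweighted distance 0 -> 4: %d\n", dist[4]);   // prints 3
}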
Dijkstra's algorithm and BFS are essentially the same algorithm. As other members have said, Dijkstra uses a priority_queue whereas BFS uses a queue. The difference comes from the way the shortest path is calculated in each.
In the BFS-style approach, to find the shortest path we traverse in all directions and update the distance array as we go. Basically, the pseudo-code is as follows:
for every vertex v: dist[v] = infinity   // initialise all distances
dist[src] = 0;
q.push(src);
while (queue not empty) {
    pop the node at the front (say u)
    for each edge (u, v) with weight w
        if (dist[u] + w < dist[v]) {
            dist[v] = dist[u] + w        // update the distance of v
            push v into the queue
        }
}
The above code will also give the shortest path in a weighted graph. But the time complexity is not the O(E+V) of normal BFS; it is higher, because vertices can be pushed into the queue, and their outgoing edges relaxed, more than once.
[Figure: example graph]
Consider the graph above. If you dry-run the pseudo-code on it, you will find that node 2 and node 3 are pushed into the queue twice, and the distances of all later nodes are then updated twice as well.
[Figure: BFS-style traversal on the example graph]
So, assuming there are many more nodes after node 3, the distances computed from the first insertion of node 2 are used for all later nodes, and those distances are then updated again after the second push of node 2. The same happens with node 3.
So you can see that nodes are processed repeatedly; nodes and edges are not traversed only once.
Dijkstra's algorithm does something smarter here: rather than expanding in all directions, it always expands the vertex with the currently smallest distance, so repeated distance updates are largely avoided.
So, to track the shortest distance, we have to use a priority_queue in place of the normal queue.
[Figure: Dijkstra's algorithm on the example graph]
If you dry-run the same graph again using Dijkstra's algorithm, you will find that nodes may still be pushed twice, but only the entry with the shorter distance is actually processed.
So all nodes are processed only once, but the time complexity is higher than that of normal BFS because of the priority_queue operations.
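For comparison, a minimal C++ sketch (mine, on a made-up weighted graph) of the same relaxation loop with the plain queue swapped for a priority_queue, i.e. Dijkstra with lazy deletion of stale entries:
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>
#include <limits>

int main() {
    struct Edge { int to, w; };
    std::vector<std::vector<Edge>> adj = {
        {{1, 4}, {2, 1}}, {{3, 1}}, {{1, 2}, {3, 5}}, {}
    };
    const int n = adj.size(), src = 0, INF = std::numeric_limits<int>::max();

    std::vector<int> dist(n, INF);
    using State = std::pair<int, int>;              // (distance, vertex)
    std::priority_queue<State, std::vector<State>, std::greater<State>> pq;
    dist[src] = 0;
    pq.push({0, src});

    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d != dist[u]) continue;                 // stale entry: u was already finalised
        for (auto [v, w] : adj[u])
            if (d + w < dist[v]) {
                dist[v] = d + w;
                pq.push({dist[v], v});
            }
    }
    std::printf("distance 0 -> 3: %d\n", dist[3]);  // prints 4, via 0-2-1-3
}
The "if (d != dist[u]) continue;" line is the lazy-deletion trick: outdated queue entries are simply skipped instead of being removed from the heap.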
With the SPFA algorithm, you can compute shortest paths with a normal queue in a weighted graph.
It is a variant of the Bellman–Ford algorithm, and it can also handle negative weights.
On the downside, it has a worse worst-case time complexity than Dijkstra's.
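A minimal SPFA sketch (mine, on a made-up graph with one negative edge and no negative cycle): it is Bellman–Ford driven by a FIFO queue, re-enqueueing a vertex only when its distance actually improves:
#include <cstdio>
#include <queue>
#include <vector>
#include <limits>

int main() {
    struct Edge { int to, w; };
    std::vector<std::vector<Edge>> adj = {
        {{1, 4}, {2, 1}}, {{3, 1}}, {{1, -2}, {3, 5}}, {}
    };
    const int n = adj.size(), src = 0, INF = std::numeric_limits<int>::max();

    std::vector<int> dist(n, INF);
    std::vector<bool> in_queue(n, false);
    std::queue<int> q;
    dist[src] = 0;
    q.push(src); in_queue[src] = true;

    while (!q.empty()) {
        int u = q.front(); q.pop(); in_queue[u] = false;
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                if (!in_queue[v]) { q.push(v); in_queue[v] = true; }   // re-enqueue only on improvement
            }
    }
    std::printf("distance 0 -> 3: %d\n", dist[3]);   // prints 0, via 0-2-1-3 using the -2 edge
}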
Since you asked for pseudocode, this website has visualizations with pseudocode: https://visualgo.net/en/sssp

Can I use Dijkstra's shortest path algorithm in my graph?

I have a directed graph that has all non-negative edges except the edge(s) that leave the source (S). There are no edges from any other vertices to the source. To find the shortest distance from the source (S) to a vertex (T) in the graph, can I use Dijkstra's shortest path algorithm even though the edges leaving the source are negative?
Assuming only source-adjacent edges can have negative weights and there is no path back to the source from any of the source-adjacent nodes (as mentioned in the comment), you can just add a constant C onto all edges leaving the source to make them all non-negative, and then subtract C from the final result. This works because every path out of the source uses exactly one of those edges, so every path's total cost is shifted by the same amount.
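A tiny worked example (my own numbers, not from the question): suppose the edges out of S have weights −3 and −1 and every other edge is non-negative. With C = 3 the outgoing edges become 0 and 2, all weights are non-negative, and Dijkstra applies. If the shortest distance to T in the modified graph is 7, the true shortest distance is 7 − 3 = 4, because every S-to-T path was inflated by exactly C.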
On a more general note, Dijkstra can be used to solve shortest-path in any graph with negative edge weights (but no negative cycles) after applying Johnson's reweighting algorithm (which is essentially Bellman-Ford, but needs to be performed only once).
Yes, you can use Dijkstra on that type of directed graph.
If you use an already-written implementation of Dijkstra that cannot handle negative values, it can be good practice to find the most negative starting edge and add its absolute value to all starting edges, so that no edge is negative any more. You subtract that number again after finishing.
If you code it yourself (which is actually pretty easy, and I recommend it), you hardly have to change anything: just start with the lowest value (as usual for Dijkstra) and allow that lowest value to be negative. It will work in your case.
The reason you generally can't use Dijkstra's algorithm for (directed) graphs with negative edges is that Dijkstra's algorithm is greedy. It assumes that once you pick the vertex with minimum distance, there is no way it can later be reached by a cheaper path.
In your particular graph, after the very first step you have traversed all possible negative edges, and Dijkstra's assumption holds from then on. Regardless of the fact that the vertices directly connected to the start now have negative values, once you identify which of them has the minimum distance, it can never be reached again with a smaller distance (since all edges you would traverse from this point on have non-negative weights).
If you think about the condition that Dijkstra's algorithm puts on the edges for it to work, it is only that path costs never decrease as paths are extended after initialisation.
Thus it doesn't actually matter that the first step is negative: from those first few vertices onwards the accumulated cost only increases, so the correct output will be found (provided there is no way to get back to the start vertex).

What modifications could you make to a graph to allow Dijkstra's algorithm to work on it?

So I've been thinking, without resorting to another algorithm, what modifications could you make to a graph to allow Dijkstra's algorithm to work on it, and still get the correct answer at the end of the day? If it's even possible at all?
I first thought of adding a constant equal to the magnitude of the most negative weight to all weights, but I found that that would mess up everything and change the original single-source shortest paths.
Then I thought of traversing the graph, putting all weights that are less than zero into an array or something of the sort, and then multiplying them by -1. I think this would work (disregarding running time constraints), but maybe I'm looking at it the wrong way.
EDIT:
Another idea: how about permanently setting all negative weights to infinity, that way ensuring that they are ignored?
So I just want to hear some opinions on this; what do you guys think?
It seems you are looking for something similar to Johnson's algorithm:
First, a new node q is added to the graph, connected by zero-weight edges to each of the other nodes.
Second, the Bellman–Ford algorithm is used, starting from the new vertex q, to find for each vertex v the minimum weight h(v) of a path from q to v. If this step detects a negative cycle, the algorithm is terminated.
Next the edges of the original graph are reweighted using the values computed by the Bellman–Ford algorithm: an edge from u to v, having length w(u,v), is given the new length w(u,v) + h(u) − h(v).
Finally, q is removed, and Dijkstra's algorithm is used to find the shortest paths from each node s to every other vertex in the reweighted graph.
Whatever algorithm you use, you should first check for negative cycles, and if there is no negative cycle, find the shortest path.
In your case you need to run Dijkstra's algorithm only once. Also note that in Johnson's algorithm the Bellman–Ford step runs just once, from the newly added node (not from all vertices).
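A minimal end-to-end sketch (mine, on a made-up graph with one negative edge and no negative cycle) of the steps quoted above: add the virtual node q, run Bellman–Ford once from it to obtain the potentials h(v), reweight every edge to w(u,v) + h(u) − h(v), run ordinary Dijkstra from the source, and finally undo the reweighting:
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>
#include <limits>

struct Edge { int from, to, w; };

int main() {
    const int n = 4;
    std::vector<Edge> edges = {
        {0, 1, 3}, {0, 2, 8}, {1, 2, -4}, {2, 3, 2}, {1, 3, 7}
    };
    const int INF = std::numeric_limits<int>::max() / 2;

    // 1) Virtual node q = n with zero-weight edges to every vertex, then
    //    Bellman-Ford from q gives the potentials h(v).
    std::vector<int> h(n + 1, INF);
    h[n] = 0;
    std::vector<Edge> aug = edges;
    for (int v = 0; v < n; ++v) aug.push_back({n, v, 0});
    for (int round = 0; round < n; ++round)            // (n+1) - 1 relaxation rounds
        for (const Edge& e : aug)
            if (h[e.from] + e.w < h[e.to])
                h[e.to] = h[e.from] + e.w;
    // (a further improving round here would mean a negative cycle, assumed absent)

    // 2) Reweight: w'(u,v) = w(u,v) + h(u) - h(v) is non-negative for every edge.
    std::vector<std::vector<std::pair<int,int>>> adj(n);   // (to, reweighted weight)
    for (const Edge& e : edges)
        adj[e.from].push_back({e.to, e.w + h[e.from] - h[e.to]});

    // 3) Ordinary Dijkstra on the reweighted graph, from source 0.
    std::vector<int> dist(n, INF);
    using State = std::pair<int, int>;                  // (distance, vertex)
    std::priority_queue<State, std::vector<State>, std::greater<State>> pq;
    dist[0] = 0;
    pq.push({0, 0});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d != dist[u]) continue;                     // stale entry
        for (auto [v, w] : adj[u])
            if (d + w < dist[v]) { dist[v] = d + w; pq.push({dist[v], v}); }
    }

    // 4) Undo the reweighting: real distance = dist'(v) - h(0) + h(v).
    for (int v = 0; v < n; ++v)
        std::printf("distance 0 -> %d: %d\n", v, dist[v] - h[0] + h[v]);
}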

Resources