I want to solve a variation of the shortest path problem, but I can't figure out how to deal with the additional constraint.
A few cities (N <= 50) are given, along with two (N * N) matrices
denoting the travel time and the toll between each pair of cities. Now, given
a time t (<10000), we have to choose a path from city 0
to city N-1 such that the toll cost is minimized and we complete the travel
within the given time t.
I know that with only one parameter, such as time alone, we can use a shortest path algorithm such as Bellman–Ford or Dijkstra's. But how do we modify it to include two constraints? How can we formulate a dynamic programming solution for this problem?
I am trying to solve it with DP + complete search. Am I heading in the right direction, or are there better algorithms than this approach?
It is possible to use Dijkstra's algorithm for this problem. First, you need to build a graph of states, where each state represents a city together with the time used so far. Between state (city A, time t) and state (city B, time t1) there is an edge only if you can move from city A to city B in exactly (t1 - t) time, and the weight of that edge is the toll from A to B. Running standard Dijkstra on this state graph then minimizes the total toll.
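A minimal sketch of that state-graph Dijkstra in Python (the function name and the matrix layout are my own assumptions; `None` marks a missing connection, and each state is a (city, time used) pair):

```python
import heapq

def min_toll_within_time(time_mat, toll_mat, t_limit):
    """Minimum toll from city 0 to city N-1 within t_limit time units.

    time_mat[i][j] / toll_mat[i][j] give the travel time / toll of the
    direct connection i -> j (None where no connection exists).
    Returns None if the goal cannot be reached in time.
    """
    n = len(time_mat)
    INF = float("inf")
    # best[city][t] = least toll seen for the state (city, t time used)
    best = [[INF] * (t_limit + 1) for _ in range(n)]
    best[0][0] = 0
    pq = [(0, 0, 0)]  # (toll so far, city, time used)
    while pq:
        toll, city, used = heapq.heappop(pq)
        if toll > best[city][used]:
            continue                  # stale queue entry
        if city == n - 1:
            return toll               # tolls are non-negative, so this is optimal
        for nxt in range(n):
            if nxt == city or time_mat[city][nxt] is None:
                continue
            nt = used + time_mat[city][nxt]
            if nt <= t_limit and toll + toll_mat[city][nxt] < best[nxt][nt]:
                best[nxt][nt] = toll + toll_mat[city][nxt]
                heapq.heappush(pq, (best[nxt][nt], nxt, nt))
    return None
```

With N <= 50 and t < 10000 this is at most 500,000 states, which is perfectly manageable.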
Related
I am thinking of using the A* algorithm to solve the above question. How do I satisfy both requirements by combining them into a heuristic function?
I think you can use a simple distance score for the heuristic, and use both the tolls and the time for the cost calculation, as c0der mentioned.
Your A* search should compare all the current paths dynamically, though. Prune loops as usual, and when two paths reach the same node, discard the more costly one. After each iteration, as the path list is reordered by heuristic score, the better-performing paths rise toward the top of the list.
I am currently learning about graph algorithms, and at my university we were taught that the result of Bellman–Ford is a table of distances from every node to every other node (all-pairs shortest paths). However, I did not understand how the algorithm achieves this, and I have tried to figure it out by watching YouTube videos, looking up definitions on Wikipedia, and so forth.
Now here comes the problem:
I could not find any resource that described the algorithm in a way that would produce the all-pairs shortest paths table; they all describe it as "from one node to all other nodes".
Can the Bellman–Ford algorithm be tweaked to produce the all-pairs shortest paths table, or is my university lecturer completely wrong about this? (He did explain an algorithm that delivers all-pairs shortest paths and called it Bellman–Ford; however, I think it cannot be Bellman–Ford.)
EDIT: I absolutely understand the Bellman–Ford algorithm for the problem "shortest path from one node to all other nodes".
I also understand most of the algorithm that was taught at my university for "all pairs shortest paths".
I am just very confused because the algorithm at my university was also called "Bellman–Ford".
If you speak German, here is a video where the university lecturer talks about his "Bellman–Ford" (which I think is not actually Bellman–Ford):
https://www.youtube.com/watch?v=3_zqU5GWo4w&t=715s
Bellman–Ford is an algorithm for finding the shortest paths from a given start node to every other node in the graph.
Using Bellman–Ford, we can generate all-pairs shortest paths by running the algorithm from each node in turn and collecting the shortest paths to all the others. The worst-case time complexity of this approach is O(V * V * E), and for a complete graph it becomes O(V^4), where V is the number of vertices (nodes) and E is the number of edges in the graph.
There is a better algorithm for finding all-pairs shortest paths, which runs in O(V^3) time: the Floyd–Warshall algorithm.
Here you can read more about it: https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm
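For illustration, here is a minimal sketch of the O(V * V * E) approach: a plain single-source Bellman–Ford run from each vertex in turn (the function names and the edge-list format are my own choices):

```python
def bellman_ford(n, edges, src):
    """Standard single-source Bellman-Ford; edges is a list of (u, v, w)."""
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):            # at most n-1 rounds of relaxation
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

def all_pairs_shortest_paths(n, edges):
    """All-pairs table via one Bellman-Ford run per source: O(V * V * E)."""
    return [bellman_ford(n, edges, src) for src in range(n)]
```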
The aim of the algorithm is to find the shortest path from the starting point to the ending point.
To do that, it finds the shortest distance from the starting point to every other point, and then selects the path that leads to the goal with the smallest total.
To begin with, it starts at the starting point (A) and sets every point's cost to infinity, except A's initial cost, which is set to zero.
It then considers all the possible directions out of A.
Imagine it needs to travel only to B. There might be a direct path connecting A to B with a cost of, say, 10.
But there is also a path via C: from A to C costs, say, 5, and from C to B costs only 2. This means there are two possible paths from A to B, one with cost 10 and the other with cost 5 + 2 = 7. So the algorithm updates the cost of reaching B from A to 7, not 10, and that path is selected.
You can imagine the same situation with many more points. The algorithm searches from the starting point toward the end point, traversing the possible paths and updating the cost whenever a cheaper one is found. In the end, it selects the path with the smallest total cost.
Now here comes the problem: I could not find resources that described the algorithm in a way that the result would be the all pairs shortest paths table, but only "from one node to all other nodes".
To understand that, imagine we have to travel from A to D.
The individual costs of moving from one point to another are listed below:
A to B :15
A to C :10
B to C :3
B to D :5
C to D :15
Initially, set every point's cost to infinity, except A's, which is set to zero.
First,
A->B : cost = 15 (update B's cost to 15)
A->C : cost = 10 (update C's cost to 10)
B->C : cost = 18 (B's cost plus the B->C edge cost; do not update C, as its cost is already smaller than this)
C->B : cost = 13 (C's cost plus the C->B edge cost; update B's cost to 13, as this is smaller than 15)
B->D : cost = 18 (B's new cost plus the B->D edge cost; update D's cost, as this is smaller than infinity)
C->D : cost = 25 (C's cost plus the C->D edge cost; do not update D)
So the path the algorithm chooses is the one that leads to D with a cost of 18, which turns out to be the smallest cost!
B
/ | \
A | D
\ | /
C
A->C->B->D Cost:18
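The walk-through above can be checked with a short Dijkstra implementation (a generic sketch; the adjacency-list format and names are my own choices):

```python
import heapq

def dijkstra(adj, src):
    """adj maps each node to a list of (neighbour, edge cost) pairs."""
    dist = {v: float("inf") for v in adj}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue          # outdated queue entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# the undirected example graph from the walk-through above
graph = {
    "A": [("B", 15), ("C", 10)],
    "B": [("A", 15), ("C", 3), ("D", 5)],
    "C": [("A", 10), ("B", 3), ("D", 15)],
    "D": [("B", 5), ("C", 15)],
}
print(dijkstra(graph, "A")["D"])  # prints 18 (A->C->B->D)
```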
Now you may read this link for a better understanding, and things should be pretty clear.
I asked in our university forum and got the following answer:
Bellman–Ford is originally "from one node". The invariant (the idea under the hood of the algorithm), however, does not change when you apply the original Bellman–Ford algorithm to every node of the graph.
The complexity of the original Bellman–Ford is O(V^3), and if it is started from every node it becomes O(V^4). However, there is a trick one can use, because the relaxation steps of the algorithm resemble multiplying the input matrix (containing the direct path lengths) with itself. Because this structure behaves like a mathematical (semi)ring, one can cheat and simply compute matrix^2, matrix^4, matrix^8 and so on by repeated squaring (this is the part I did not completely understand, though), achieving O(V^3 * log V).
He called this algorithm Bellman–Ford as well, because the invariant/idea behind the algorithm is still the same.
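As far as I understand the squaring trick, it amounts to "multiplying" distance matrices with min in place of + and + in place of ×, then squaring repeatedly. A rough sketch (my own naming; assumes W[i][j] holds the direct edge length, 0 on the diagonal and infinity where there is no edge):

```python
def min_plus_square(A):
    """'Multiply' A by itself with (min, +) in place of (+, *)."""
    n = len(A)
    return [[min(A[i][k] + A[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def all_pairs_by_squaring(W):
    """Repeatedly square the direct-distance matrix W.

    After about log2(n) squarings, entry [i][j] is the length of the
    shortest path over any number of edges: O(V^3 log V) overall.
    """
    n = len(W)
    D = [row[:] for row in W]
    hops = 1
    while hops < n - 1:       # paths of up to n-1 edges suffice
        D = min_plus_square(D)
        hops *= 2
    return D
```

Each squaring doubles the maximum number of edges a recorded path may use, which is why log V squarings are enough.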
German answer in our public university forum
Hi, I have an optimization problem: I have n days to travel among k cities, and I have to plan my travel so that my total cost is minimized.
The cost of travel between any two cities u and v depends on the day I decide to travel (so the cost of travel between u and v is a function f(u, v, n), with n being the day on which I am travelling), and I can travel only once a day.
I can also choose to stay in the same city.
Is there a way to solve this through a shortest path algorithm?
This is an NP-complete problem (since the Hamiltonian Path problem reduces to it). The only major difference between this and the standard Travelling Salesman Problem is that the edge weights are dynamic. This means you face a one-time preprocessing cost of O(VVE!) complexity, and that the entire cycle can then be solved in O(V^3) worst-case time.
I was able to find details of a similar problem in this paper published in IEEE, which describes the Intelligent Transportation TSP problem.
A set of cities is given, say A, B, C, D, E, F, G. The problem is to find the minimum-cost path that covers the cities A, B, C and F. It is essential that the path cover these four cities; it can (but does not have to) pass through any of the other given cities (D, E, G). Repeating a path is allowed. The path should start and end at A.
How should I go about tackling a problem along these lines?
That's a variant of the Travelling Salesman Problem (TSP) in disguise.
You can see that if you mark every city that needs to be covered (I'll call those cities "interesting" henceforth). The variant of TSP in which you are allowed to visit a node more than once is still NP-complete.
So, knowing that every exact solution to your problem will be exponential in the number of interesting cities, you can proceed as follows:
First, precompute the shortest paths between the interesting cities. This can be done by running Dijkstra's algorithm from every interesting city, or with the Floyd–Warshall algorithm. Then either try every permutation of the order in which the interesting cities are visited, or use an existing TSP solver or heuristic algorithm.
So the simplest implementation goes like this:
Apply Floyd–Warshall to the city graph. It's much simpler to implement than Dijkstra's; I've found a nice PDF with a comparison of the two. It gets you the matrix of all the shortest path lengths: AB, AC, AF, BC, BF and CF. If you need the actual path as a sequence of cities, look at the "Path reconstruction" section on Wikipedia.
Try every permutation of the order of visiting the interesting cities other than A (i.e., only B, C and F). C++, Python and Ruby have a permutation function in their standard libraries; for other languages you may want to use a third-party library or search Stack Overflow for an algorithm.
Find the permutation with the lowest total path cost. E.g., for the permutation C-F-B, the total cost is AC + CF + FB + BA. You already have all those values from Floyd–Warshall, so you can simply sum them.
If you have V cities in total and N interesting cities, the runtime of this implementation will be about O(V^3 + N!·N).
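The three steps can be sketched roughly like this (the function name and matrix format are my own assumptions; `w[i][j]` is the direct edge weight, infinity if absent, 0 on the diagonal):

```python
from itertools import permutations

def cheapest_covering_tour(n, w, interesting, start):
    """Floyd-Warshall, then brute-force permutations of the interesting
    cities. Returns the cost of the cheapest closed walk that starts and
    ends at `start` and covers every city in `interesting`."""
    d = [row[:] for row in w]
    for k in range(n):                      # step 1: Floyd-Warshall
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    rest = [c for c in interesting if c != start]
    best = float("inf")
    for perm in permutations(rest):         # step 2: every visiting order
        order = [start, *perm, start]
        cost = sum(d[a][b] for a, b in zip(order, order[1:]))
        best = min(best, cost)              # step 3: keep the cheapest
    return best
```

Because the tour uses precomputed shortest-path distances, passing through non-interesting cities and repeating roads is handled automatically.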
In this earlier question the OP asked how to find a shortest path in a graph that goes from u to v and also passes through some node w. The accepted answer, which is quite good, was to run Dijkstra's algorithm twice: once to get from u to w and once to get from w to v. This has time complexity equal to two calls to Dijkstra's algorithm, which is O(m + n log n).
Now consider a related question: you are given a sequence of nodes u1, u2, ..., uk and want to find the shortest path from u1 to uk such that the path passes through u1, u2, ..., uk in order. Clearly this could be done by running k-1 instances of Dijkstra's algorithm, one for each pair of adjacent vertices, then concatenating the shortest paths together. This takes O(km + kn log n) time. Alternatively, you could use an all-pairs shortest paths algorithm such as Johnson's algorithm to compute all the shortest paths, then concatenate the appropriate ones together in O(mn + n^2 log n) time, which is better when k is much larger than n.
My question is whether there is an algorithm for solving this problem that is faster than the above approaches when k is small. Does such an algorithm exist? Or is iterated Dijkstra's as good as it gets?
Rather than running isolated instances of Dijkstra's algorithm to find the paths u(k) -> u(k+1) one at a time, could a single modified Dijkstra-like search be started at every node in the sequence simultaneously, with the paths formed where the search regions meet in the middle?
This would potentially cut down the total number of edges visited and reduce the re-traversal of edges compared to making a series of isolated calls to Dijkstra's algorithm.
An easy example is finding the path between two nodes: it is better to expand search regions around both nodes than around just one. In a uniform graph, the one-sided option gives a search region whose radius equals the distance between the nodes, while the two-sided option gives two regions of half that radius, i.e. less overall search area.
Just a thought.
EDIT: I guess I'm talking about a multi-directional variant of bidirectional search, with as many directions as there are nodes in the sequence {u(1), u(2), ..., u(m)}.
I don't see how we can do much better; here's all I can think of. Assuming the graph is undirected, the shortest path from any node u to node v is the same as that from v to u (in reverse, of course).
Now, for your case of a shortest path visiting u1, u2, u3, ..., un in order, we could run Dijkstra's algorithm from u2 (finding the shortest paths u1-u2 and u2-u3 in one run), then from u4 (for u3-u4 and u4-u5), then u6, and so on. This roughly halves the number of Dijkstra runs. Note that, complexity-wise, this is identical to the original solution.
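A rough sketch of that every-second-waypoint idea (my own naming; assumes an undirected adjacency-list graph, and each leg cost is the shortest-path distance between consecutive waypoints):

```python
import heapq

def dijkstra(adj, src):
    """adj maps each node to a list of (neighbour, edge cost) pairs."""
    dist = {v: float("inf") for v in adj}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def sequenced_path_cost(adj, seq):
    """Cost of the cheapest walk visiting seq[0], seq[1], ... in order.

    Runs Dijkstra only from every second waypoint: in an undirected
    graph, one run from seq[i] yields both the leg into seq[i] and the
    leg out of it, roughly halving the number of runs.
    """
    total = 0
    for i in range(1, len(seq), 2):
        dist = dijkstra(adj, seq[i])
        total += dist[seq[i - 1]]          # leg seq[i-1] -> seq[i]
        if i + 1 < len(seq):
            total += dist[seq[i + 1]]      # leg seq[i] -> seq[i+1]
    return total
```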
You get the shortest paths from one vertex to all the others in the graph with a single call to Dijkstra's algorithm. Thus you only need one search per unique starting vertex, so repeated vertices do not make the problem any harder.