I have a shortest path problem that I need to solve, and I'm fairly certain that there should be an efficient solution.
I have this fun representation of a graph, as ASCII:
Input:
. N N N N N N
W 7 9 8 8 7 5 N . . . N N N
W 2 2 2 1 1 6 6 N N N 5 5 N E
W 1 2 3 2 2 2 2 4 5 5 4 2 5 E
. S S S 3 3 3 2 6 5 4 2 2 2 E
. . . . S S 2 2 2 2 3 7 2 2 E
. . . . . . S 2 3 2 7 7 7 7 E
. . . . . . . S 7 7 7 7 7 7 E
. . . . . . . S 7 7 7 S S 7 E
. . . . . . . . S 7 S . . S
. . . . . . . . . S
Output:
35
I need to find the shortest path from one of the 'E' spots to one of the 'W' spots, walking over the numbered spots. We are not able to walk on the 'N' and 'S' spots. When we stand on a spot, we're able to walk up, down, left and right. The length of a path is the sum of the numbered squares I walk on, and that is what I want to minimize. Here is a simpler example:
I would create a directed graph with edges in both directions between adjacent walkable spots: every edge going into a numbered spot has that number as its weight, and every edge going into an E or W spot has weight 0:
my attempt
Now this is a case of finding a shortest path from multiple sources to multiple sinks.
My naive thought is that I could run Dijkstra from every W to every E. This would, however, run in something like O(W * E) Dijkstra runs (where W and E are the numbers of W and E nodes).
Is there any smarter way to do a multi-source multi-sink Dijkstra?
After giving this some thought, I think I have come up with a solution myself. The accepted answer is completely valid and good, but I think this should run faster than a BFS that, in the worst case, needs to evaluate each edge E times.
Here goes:
I connect all E nodes to a new source node s, with edges going from s to each E node with weight 0.
All W nodes are likewise connected to a sink node t, with edges going from each W node to t, also with weight 0.
This means that the graph shown before will look like this:
Now I should be able to just run a regular ol' Dijkstra from s to t.
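For what it's worth, here is a rough Python sketch of that idea (the grid parsing and all names are mine, not from the post). Rather than materialising s and t as explicit nodes, it seeds every E cell into the priority queue with distance 0 and stops as soon as any W cell is settled, which amounts to the same thing:

import heapq

def shortest_e_to_w(ascii_grid):
    rows = [line.split() for line in ascii_grid.strip().splitlines()]

    def cost(r, c):
        cell = rows[r][c]
        if cell in ("N", "S", "."):
            return None                      # not walkable
        return 0 if cell in ("E", "W") else int(cell)

    # Virtual source: every E cell starts with distance 0.
    pq = [(0, r, c) for r, row in enumerate(rows)
                    for c, cell in enumerate(row) if cell == "E"]
    heapq.heapify(pq)
    dist = {(r, c): 0 for _, r, c in pq}

    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist.get((r, c), float("inf")):
            continue                         # stale queue entry
        if rows[r][c] == "W":
            return d                         # virtual sink: first W settled is optimal
        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= rr < len(rows) and 0 <= cc < len(rows[rr]):
                step = cost(rr, cc)
                if step is not None and d + step < dist.get((rr, cc), float("inf")):
                    dist[(rr, cc)] = d + step
                    heapq.heappush(pq, (d + step, rr, cc))
    return None

Calling shortest_e_to_w on the ASCII block from the question should give 35.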
A solution was mentioned in the comments but bears conversion to an answer: run a normal BFS but re-enqueue any previously visited nodes which can be reached with a lower cost than previously thought. Re-enqueuing a newly-discovered cheaper path to a visited node lets us recompute its neighbors' paths as well.
The downside is that this BFS will explore all nodes at least once, so how well it does depends on the type of graph you have: if the graph has many starting and ending points relative to its size, as in your example, this is good, whereas running multiple Dijkstras becomes more appealing as the number of sources and sinks diminishes relative to the size of the graph.
Here's the code:
const findAllCoords = (G, tgt) =>
  G.reduce((a, row, ri) => {
    const res = row.reduce((a, cell, ci) =>
      cell === tgt ? [...a, [ci, ri]] : a
    , []);
    return res.length ? [...a, ...res] : a;
  }, [])
;

const shortestPathMultiSrcMultiDst = (G, src, dst) => {
  const key = ([x, y]) => `${x} ${y}`;
  const dstCoords = findAllCoords(G, dst);

  // Best known cost per cell; destination cells start at Infinity so that
  // unreached destinations still show up in the final minimum.
  const visited = Object.fromEntries(
    dstCoords.map(e => [key(e), Infinity])
  );

  // Walkable neighbors: numbered cells, or destination cells.
  const neighbors = ([x, y]) =>
    [[0, -1], [-1, 0], [1, 0], [0, 1]]
      .map(([xx, yy]) => [x + xx, y + yy])
      .filter(([x, y]) =>
        G[y] && (!isNaN(G[y][x]) || G[y][x] === dst)
      )
  ;

  // BFS from all sources at once; a cell is re-enqueued whenever it is
  // reached again with a lower cost than previously recorded.
  const q = findAllCoords(G, src).map(e => [e, 0]);
  while (q.length) {
    let [xy, cost] = q.shift();
    const [x, y] = xy;
    const k = key(xy);
    cost += isNaN(G[y][x]) ? 0 : +G[y][x];  // the W/E marker cells cost 0
    if (!(k in visited) || cost < visited[k]) {
      visited[k] = cost;
      q.push(...neighbors(xy).map(e => [e, cost]));
    }
  }
  return Math.min(...dstCoords.map(e => visited[key(e)]));
};
const G = `
. N N N N N N
W 7 9 8 8 7 5 N . . . N N N
W 2 2 2 1 1 6 6 N N N 5 5 N E
W 1 2 3 2 2 2 2 4 5 5 4 2 5 E
. S S S 3 3 3 2 6 5 4 2 2 2 E
. . . . S S 2 2 2 2 3 7 2 2 E
. . . . . . S 2 3 2 7 7 7 7 E
. . . . . . . S 7 7 7 7 7 7 E
. . . . . . . S 7 7 7 S S 7 E
. . . . . . . . S 7 S . . S
. . . . . . . . . S
`.trim().split("\n").map(e => e.split(" "))
;
console.log(shortestPathMultiSrcMultiDst(G, "W", "E"));
OP shared a better answer that simply turns the problem into a regular Dijkstra-solvable graph by connecting all sources to a node and all sinks to a node, rendering this solution pretty much pointless as far as I can tell.
Related
Suppose I had a graph of nodes where the cost of travelling between any two connected nodes is uniform. I'm trying to find the closest node that 2 or more given nodes can all travel to, where "closest" is measured as the cumulative cost of reaching the common node from all start points.
If I wanted to find the closest common node to nodes A and B, that node would be E.
A -> E (2 cost)
B -> E (1 cost)
If I wanted to find the closest common node to nodes A, B, and C, that node would be F.
A -> F (3 cost)
B -> F (2 cost)
C -> F (1 cost)
And if I wanted to find the closest common node for nodes G and E, no such node exists.
So there should be two outputs: either the closest node, or an error message stating that no common node can be reached.
I would appreciate an algorithm that can achieve this. A link to an article, pseudocode, or code in any language is fine; below is some Python code that represents the graph above as a defaultdict(list) object.
from enum import Enum
from collections import defaultdict

class Type(Enum):
    A = 1
    B = 2
    C = 3
    D = 4
    E = 5
    F = 6
    G = 7

paths = defaultdict(list)
paths[Type.A].append(Type.D)
paths[Type.D].append(Type.G)
paths[Type.D].append(Type.E)
paths[Type.B].append(Type.E)
paths[Type.E].append(Type.F)
paths[Type.C].append(Type.F)
Thanks in advance.
Thanks to @VincentvanderWeele for the suggestion:
Example cost of all nodes from A, B:
  A B C D E F G
  _____________
A 0 X X 1 2 3 2
B X 1 X X 2 2 X
As an optimisation, when working out the 2nd+ node you can skip any nodes that the previous nodes cannot travel to, e.g.
  A B C D E F G
  _____________
A 0 X X 1 2 3 2
B X X X X 2 2 X
    ^
Possible closest nodes:
E = 2 + 2 = 4
F = 2 + 3 = 5
Result is E since it has the lowest cost
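Here is a small Python sketch of that approach against the defaultdict graph from the question (the helper function and its name are my own): run a BFS from each start node, keep only the nodes every start can reach, and return the one with the smallest total cost, or None if there is no common node.

from collections import deque

def closest_common_node(paths, starts):
    def distances(src):
        # Plain BFS; every edge has uniform cost 1.
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in paths[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    per_start = [distances(s) for s in starts]
    # Only nodes reachable from every start node are candidates.
    common = set(per_start[0]).intersection(*per_start[1:])
    if not common:
        return None    # caller reports the "cannot be reached" error
    return min(common, key=lambda n: sum(d[n] for d in per_start))

With the graph above, closest_common_node(paths, [Type.A, Type.B]) should give Type.E, and closest_common_node(paths, [Type.G, Type.E]) gives None.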
We are given an array A of integers. I want to find 2 contiguous subarrays of the largest length (both subarrays must be equal in length) that have the same weighted average, where the weights are the positions within the subarray. For example:
A=41111921111119
Subarrays: (11119) and (11119)
I've tried to find the weighted average of all subarrays by DP and then sorting them column-wise to find 2 with the same length, but I can't proceed further and my approach seems too vague/brute-force. I would appreciate any help. Thanks in advance.
The first step should be to sort the array. Any pairs of equal values can then be identified and factored out. The remaining numbers will all be different, like this:
2, 3, 5, 9, 14, 19 ... etc
The next step would be to compare pairs to their center:
2 + 5 == 2 * 3 ?
3 + 9 == 2 * 5 ?
5 + 14 == 2 * 9 ?
9 + 19 == 2 * 14 ?
The next step is to compare nested pairs, meaning if you have A B C D, you compare A+D to B+C. So for the above example it would be:
2+9 == 3+5 ?
3+14 == 5+9 ?
5+19 == 9+14 ?
Next you would compare triples to the two inside values:
2 + 3 + 9 == 3 * 5 ?
2 + 5 + 9 == 3 * 3 ?
3 + 5 + 14 == 3 * 9 ?
3 + 9 + 14 == 3 * 5 ?
5 + 9 + 19 == 3 * 14 ?
5 + 14 + 19 == 3 * 9 ?
Then you would compare pairs of triples:
2 + 3 + 19 == 5 + 9 + 14 ?
2 + 5 + 19 == 3 + 9 + 14 ?
2 + 9 + 19 == 3 + 5 + 14 ?
and so on. There are different ways to do the ordering. One way is to create an initial bracket; for example, given A B C D E F G H, the initial bracket is ABGH versus CDEF, i.e. the outside compared to the center. Then switch values according to the comparison. For example, if ABGH > CDEF, then you can try all switches where the left value is greater than the right value. In this case G and H are greater than E and F, so the possible switches are:
G <-> E
G <-> F
H <-> E
H <-> F
GH <-> EF
First, as the lengths of the two subarrays must be equal, you can consider each length from 1 to n, step by step.
For length i, you can calculate the weighted sums of all subarrays of that length in O(n) total. Then sort the sums to determine whether there's an equal pair.
Because you sort n times, the time would be O(n^2 log n) while the space is O(n).
Maybe I just repeated your solution mentioned in the question? But I don't think it can be optimized any more...
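A quick Python sketch of that per-length scan (the function name and the sliding-window update are mine; I also use a dict instead of sorting, which drops the log factor but is otherwise the same idea). Since the lengths are equal, equal weighted sums imply equal weighted averages:

def longest_equal_weighted_avg_pair(A):
    n = len(A)
    for length in range(n - 1, 0, -1):          # try the longest length first
        total = sum(A[:length])                  # plain sum of the current window
        wsum = sum((k + 1) * A[k] for k in range(length))   # 1*a0 + 2*a1 + ...
        seen = {wsum: 0}                         # weighted sum -> start index
        for start in range(1, n - length + 1):
            # Slide the window one position to the right in O(1).
            wsum = wsum - total + length * A[start + length - 1]
            total = total - A[start - 1] + A[start + length - 1]
            if wsum in seen:
                return length, seen[wsum], start
            seen[wsum] = start
    return None

Note this treats any two distinct starting positions as a valid pair, so the two subarrays may overlap; if they must be disjoint, an extra check that the starts are at least length positions apart is needed.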
Saw the following puzzle on HN and thought I would repost here. It can be solved using Simplex, but I was wondering if there is a more elegant solution, or if someone can prove NP-completeness.
Each dot below represents the position of a laser. Indicate the direction that the laser should fire by replacing the dot with ^, v, <, or >. Each grid position i,j should be hit by exactly grid[i][j] lasers. In the example below, grid position 0,0 should be hit by exactly grid[0][0] = 2 lasers.
A laser goes through everything in its path including other guns (without destroying those guns).
2 2 3 . 1 . 2 2 3
1 . 2 1 1 . 1 . 2
2 3 . 1 . 2 . 4 .
. 3 . 2 2 . 2 3 4
1 . 2 . 2 3 2 . .
2 3 . 3 . 3 2 2 .
3 . 2 4 2 . 2 . 2
1 1 . . 1 3 . 2 .
. 2 1 . 2 . 1 . 3
If it can be solved with Simplex (linear programming) it isn't NP-complete (unless P = NP), since linear programs can be solved in polynomial time.
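To make that concrete, here is a rough sketch of the puzzle as a 0/1 integer program in Python using PuLP (the library choice and every name below are mine, not from the thread, and it assumes the pulp package with its bundled CBC solver is installed): each laser gets one binary variable per direction, and each numbered cell constrains how many beams pass through it.

from pulp import LpProblem, LpVariable, lpSum

PUZZLE = """
2 2 3 . 1 . 2 2 3
1 . 2 1 1 . 1 . 2
2 3 . 1 . 2 . 4 .
. 3 . 2 2 . 2 3 4
1 . 2 . 2 3 2 . .
2 3 . 3 . 3 2 2 .
3 . 2 4 2 . 2 . 2
1 1 . . 1 3 . 2 .
. 2 1 . 2 . 1 . 3
"""

grid = [row.split() for row in PUZZLE.strip().splitlines()]
R, C = len(grid), len(grid[0])
DIRS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

lasers = [(r, c) for r in range(R) for c in range(C) if grid[r][c] == "."]
prob = LpProblem("lasers")

# x[r, c, d] == 1 means the laser at (r, c) fires in direction d.
x = {(r, c, d): LpVariable(f"x_{r}_{c}_{d}", cat="Binary")
     for r, c in lasers for d in DIRS}

# Each laser fires in exactly one direction.
for r, c in lasers:
    prob += lpSum(x[r, c, d] for d in DIRS) == 1

def beam_hits(lr, lc, d, r, c):
    # Does the beam from the laser at (lr, lc) firing in direction d pass through (r, c)?
    dr, dc = DIRS[d]
    if dr == 0:
        return lr == r and (c - lc) * dc > 0
    return lc == c and (r - lr) * dr > 0

# Each numbered cell must be hit by exactly that many beams.
for r in range(R):
    for c in range(C):
        if grid[r][c] != ".":
            prob += lpSum(x[lr, lc, d] for lr, lc in lasers for d in DIRS
                          if beam_hits(lr, lc, d, r, c)) == int(grid[r][c])

prob.solve()   # any feasible 0/1 assignment of the x variables solves the puzzle

Reading the 0/1 values back out of the x variables then gives the arrow for each dot. The integrality requirement is what keeps this from being a plain LP, so the sketch doesn't settle the complexity question either way.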
I am working on a project where I need to solve a maze using the minimum number of right turns and no left turns.
The distance travelled is irrelevant as long as right turns are minimized. We are asked to implement our program using both a backtracking algorithm and an optimal (time) one.
For the backtracking algorithm I was going to use a stack. My algorithm would be something like:
Push all four possible starting directions on the stack.
Follow a path, going straight whenever possible.
If we reach the end of the maze return the current path length as the best.
If we reach a dead end backtrack to the last possible right turn and take it.
If the current path length is greater than the current best, backtrack to the last possible right turn and take it.
I was wondering if anyone could point me in the direction of an optimal algorithm for this.
I'm having a tough time thinking of anything better than the backtracking one.
I think you can do it by first finding all the points that are reachable with 0 right turns, then those reachable with just 1 right turn, and so on. You can use a queue for that. Note that after the n-th phase you have optimal solutions for all the points that can be reached with n right turns; moreover, any not-yet-reached point either needs more than n right turns or is not reachable at all.
In order to achieve optimal time complexity you have to use the fact that you only need to search for new reachable points from those reached points which have an unreached neighbour in the appropriate direction. As every point has only 4 neighbours, you will only search from it 4 times. You can implement this by maintaining, for every direction D, a separate list containing all the reached points with an unreached neighbour in that direction. This gives you a time complexity proportional to the area of the maze, which is proportional to the size of your input data.
Below I present a graphical example:
. . . . . . (0) . . . . . 0 1 1 1 1 (1) 0 1 1 1 1 1
. ####### . . 0 ########## . 0 ########## . 0 ########## 2
. # . # . . 0 # . # . . 0 # . # . . 0 # . # . (2)
. # . . . . 0 # . . . . 0 # . . . . 0 # . . . (2)
. #### . # . 0 #### . # . 0 #### . # . 0 #### . # 2
(.) . . . . . (0) . . . . . 0 1 1 1 1 1 0 1 1 1 1 1
0 1 1 1 1 1 0 1 1 1 1 1
0 ########## 2 0 ########## 2
0 # . # 3 2 0 # 4 # 3 2
0 # (3) 3 3 2 0 # 3 3 3 2
0 #### . # 2 0 #### 4 # 2
0 1 1 (1) 1 1 0 1 1 1 1 1
( ) denote reached points with the appropriate neighbour unreached
Build a graph by constructing four nodes for every position in the maze. Every node will correspond to a particular direction: N, S, E, W.
There will be edges between every node and at most three of its adjacent neighbors: the ones to the "front", "back" and "right" (if they exist). The edge leading to the node in the "front" and "back" will have weight 0 (since the path length doesn't matter), whereas the edge to the node in the "right" will have weight 1 (this is what represents the cost of making a right turn).
Once the graph is set up (and probably the best way to do this is to reuse the original representation of the maze) a modified variant of the breadth-first search algorithm will solve the problem.
The trick to handle the 0/1 edge weights is to use a deque instead of the standard queue implementation. Specifically, nodes reached through weight-0 edges are pushed onto the front of the deque and those reached through weight-1 edges are pushed onto the back.
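Here is a short Python sketch of that deque trick (the maze encoding and all names are mine; for brevity it only models the straight moves with weight 0 and right turns with weight 1, leaving out the "back" edges described above):

from collections import deque

DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]        # N, E, S, W; a right turn is +1 mod 4

def min_right_turns(open_cells, start, goal):
    # open_cells: set of (row, col) positions that can be walked on.
    INF = float("inf")
    dist, dq = {}, deque()
    for d in range(4):                            # any initial facing direction is free
        dist[(start, d)] = 0
        dq.append((start, d))
    while dq:
        cell, d = dq.popleft()
        cost = dist[(cell, d)]
        for nd, w in ((d, 0), ((d + 1) % 4, 1)):  # go straight, or turn right and step
            dr, dc = DIRS[nd]
            ncell = (cell[0] + dr, cell[1] + dc)
            if ncell in open_cells and cost + w < dist.get((ncell, nd), INF):
                dist[(ncell, nd)] = cost + w
                if w == 0:
                    dq.appendleft((ncell, nd))    # weight-0 edge goes to the front
                else:
                    dq.append((ncell, nd))        # weight-1 edge goes to the back
    best = min(dist.get((goal, d), INF) for d in range(4))
    return None if best == INF else best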
A grid of N×N is given. Each point is assigned a value, say num.
Starting from (1,1) we have to traverse to (N,N).
If (i,j) is the current position, we can go right or down.
How do we find the minimum sum of values picked up by traversing from (1,1) to (N,N) along any path?
Any two points can have the same number.
Example:
1 2 3
4 5 6
7 8 9
1+2+3+6+9 = 21
n <= 10000000000
Output: 21
Can someone explain how to approach the problem?
This is a dynamic programming problem. The subproblem here is the minimum cost of a path to any given square. Because you can only move down and to the right, there are only two squares that can let you enter a given square: the one above and the one to the left. Therefore the cost of getting to square (i,j) is min(cost[i-1][j], cost[i][j-1]) + num. If this would put you out of bounds, only consider the option that is inside the grid. Calculate each row from left to right, doing the top row first and working your way down. The cost you get at (N,N) will be the minimal cost.
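A small Python sketch of that recurrence (the function name is mine):

def min_path_sum(grid):
    n = len(grid)
    cost = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == 0 and j == 0:
                cost[i][j] = grid[i][j]
            elif i == 0:                         # top row: can only come from the left
                cost[i][j] = cost[i][j - 1] + grid[i][j]
            elif j == 0:                         # first column: can only come from above
                cost[i][j] = cost[i - 1][j] + grid[i][j]
            else:
                cost[i][j] = min(cost[i - 1][j], cost[i][j - 1]) + grid[i][j]
    return cost[n - 1][n - 1]

print(min_path_sum([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))   # prints 21, matching the example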
Here is my solution with dynamic programming in O(n^2).
You start at (1,1), so you can find, say, a = (1,2) and b = (2,1) via a = value(1,1) + value(1,2) and b = value(1,1) + value(2,1). Then, to find (2,2), select the minimum of (a + value(2,2)) and (b + value(2,2)), and continue with this logic. With that algorithm you can find the minimum sum between (1,1) and any (i,j). Let me explain:
Given Matrix
1 2 3
4 5 6
7 8 9
Shortest path :
1 3 .
5 . .
. . .
so to find (2,2) take the original value(2,2) = 5 from the given matrix and select min(5 + 5, 3 + 5) = 8. So:
Shortest path :
1 3 6
5 8 .
12 . .
so to find (3,2) select min(12 + 8, 8 + 8) = 16, and (2,3) = min(8 + 6, 6 + 6) = 12
Shortest path :
1 3 6
5 8 12
12 16 .
so the last one, (3,3) = min(12 + 9, 16 + 9) = 21
Shortest path from (1,1) to any point (i, j):
1 3 6
5 8 12
12 16 21
You can convert the grid into a graph. The edges get the weights of the values from your grid elements. Then you can find the solution by solving the shortest path problem.
start--1--+--2--+--3--+
          |     |     |
          4     5     6
          |     |     |
          +--5--+--6--+
          |     |     |
          7     8     9
          |     |     |
          +--8--+--9--end
Can someone explain how to approach the problem?
Read about dynamic programming and go from there.
Attempt:
Start with the first row and calculate the cumulative values and store them.
Proceed to the second row; now the values can only have come from the left or the top (since you can only go right or down), so calculate the smallest cumulative value for each cell in this row.
Iterate down the rows until the last and you'll be able to get the smallest value when you reach the last node.
I claim this algorithm is O(n) in the number of grid cells, since if you use a 2-dimensional array you only need to access each field at most twice for reading (from the top and from the left) and once for writing.
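That row-by-row scheme can be sketched like this in Python (names mine); only the previous row of cumulative values needs to be kept around:

def min_path_sum_rows(grid):
    prev = []                                    # cumulative costs of the previous row
    for r, row in enumerate(grid):
        cur = []
        for c, v in enumerate(row):
            if r == 0 and c == 0:
                cur.append(v)
            elif r == 0:
                cur.append(cur[c - 1] + v)       # top row: only reachable from the left
            elif c == 0:
                cur.append(prev[0] + v)          # first column: only reachable from above
            else:
                cur.append(min(prev[c], cur[c - 1]) + v)
        prev = cur
    return prev[-1]                              # smallest value at the bottom-right node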
If you want to go really fancy or have to operate on massive matrices, A* could also be an option.