I have been using Dijkstra's algorithm to find the shortest path with the graph API provided in Princeton University's Algorithms, Part 2 course, and I have figured out how to find the path using Chebyshev distance.
With Chebyshev distance, moving to any neighbouring node costs only 1, so the choice of neighbour has no impact on the total cost. But, as the red circle in the graph shows, why does the path-finding line zigzag instead of moving straight?
Will the same thing happen if I use the A* algorithm?
If you want to prioritize "straight lines" you should take the direction of the previous step into account. One possible way is to create a graph G'(V', E') where V' consists of all pairs of neighbouring vertices. For example, a vertex v = (v_prev, v_cur) would represent a position in the path where v_cur is the last vertex and v_prev is the one before it. Then, in the "update distances" step of the shortest-path algorithm, you can choose the best distance with the best (non-changing) direction.
We can also add an extra component to the distance equal to the number of direction changes, and find the minimal-distance path with the minimal number of direction changes.
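To make the idea concrete, here is a minimal Python sketch of that approach on a grid, where the pair (v_prev, v_cur) is represented equivalently as (current cell, incoming direction). The grid encoding (0 = free, 1 = blocked), the 8-connected Chebyshev moves and the tiny turn penalty are all assumptions for illustration, not part of the original question:

import heapq
from itertools import count

def dijkstra_prefer_straight(grid, start, goal, turn_penalty=0.001):
    # Dijkstra over (cell, incoming direction) states on an 8-connected grid.
    # A turn_penalty much smaller than a move cost makes straight continuations
    # win ties without changing which paths are shortest in the Chebyshev metric.
    rows, cols = len(grid), len(grid[0])
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]
    tie = count()                       # avoids comparing states when keys are equal
    dist = {(start, None): 0.0}
    pq = [(0.0, next(tie), start, None)]
    while pq:
        d, _, cur, came_dir = heapq.heappop(pq)
        if cur == goal:
            return d
        if d > dist.get((cur, came_dir), float("inf")):
            continue                    # stale queue entry
        for move in moves:
            nxt = (cur[0] + move[0], cur[1] + move[1])
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) or grid[nxt[0]][nxt[1]]:
                continue
            nd = d + 1                  # every Chebyshev move costs 1
            if came_dir is not None and move != came_dir:
                nd += turn_penalty      # small extra cost for changing direction
            if nd < dist.get((nxt, move), float("inf")):
                dist[(nxt, move)] = nd
                heapq.heappush(pq, (nd, next(tie), nxt, move))
    return float("inf")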
It shouldn't be straight in particular according to Dijkstra or A*; as you say, it has no impact on the total cost. I'll assume, by the way, that you want to prevent useless zigzagging in particular, and have no general preference for a move that goes in the same direction as the previous move.
Dijkstra and A* have no built-in dislike of "weird paths"; they only explicitly care about the cost, which implicitly means they also care about how you handle equal costs. There are a couple of things you can do about that (a small sketch of both follows below):
Use tie-breaking to make them prefer straight moves whenever two nodes have equal cost (G or F, depending on whether you're doing Dijkstra or A*). This gives some trouble around obstacles, because two choices that eventually lead to equal-length paths do not necessarily have the same F score, so they might not get tie-broken. It will never give you a sub-optimal path, though.
Slightly increase your diagonal cost; it doesn't have to be by much, say 10 for straight and 11 for diagonal. This will simply avoid any diagonal move that isn't a shortcut. But obviously, if that doesn't match the actual cost, you can now find sub-optimal paths. The bigger the cost difference, the more often that will happen. In practice it's relatively rare, and paths have to be long enough (accumulating enough cost difference to be worth an entire extra move) before it happens.
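As a rough illustration of both options (the 10/11 costs and the tie-break flag are illustrative values, not taken from the question):

def move_cost(dr, dc, straight=10, diagonal=11):
    # Option 2: make diagonals marginally more expensive than straight moves.
    return diagonal if dr != 0 and dc != 0 else straight

def priority(f_score, changes_direction):
    # Option 1: tie-break equal F (or G) scores in favour of straight moves.
    # The second tuple element only matters when the scores are exactly equal.
    return (f_score, 1 if changes_direction else 0)

With the second helper you would push priority(f, turned) into the open list instead of the plain f value.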
Related
I am doing a project to code the A* algorithm for the shortest-path problem. To determine the shortest path using the A* algorithm, I understand that we have to get the heuristic values first. Does anyone know how to calculate and determine the heuristic value for each node? [I made up the map on my own, so no heuristic values are given.]
A* and heuristic
A* always requires a heuristic; it is defined in terms of heuristic estimates of distances. In principle, A* is just the ordinary Dijkstra algorithm using heuristic guesses for the remaining distances.
The heuristic function should run fast, in O(1) at query time; otherwise you won't get much benefit from it. As a heuristic you can select any function h for which:
h is admissible: h(u) <= dist(u, t) (never overestimate)
h is monotone: h(u) <= cost(u, v) + h(v) (triangle inequality)
There are however some heuristics that are frequently used in practice like:
Straight-line distance (as-the-crow-flies)
Landmark heuristic (pre-compute distances for all nodes to a set of selected nodes (landmarks))
Dependent on your application you might also find other heuristic functions useful.
Straight-line heuristic
The straight-line (or as-the-crow-flies) distance is straightforward and easy to compute. For two nodes u, v you know their exact locations, i.e. longitude and latitude.
You then compute the straight-line distance by defining h as the Euclidean distance, or, if you want more precise results, you account for the fact that the earth is a sphere and use the great-circle distance. Both methods run in O(1).
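As a small illustration, here is how both distances might look in Python (the haversine form of the great-circle distance and the 6371 km earth radius are standard choices, assumed here for the sketch):

import math

def euclidean(lat1, lon1, lat2, lon2):
    # Straight-line estimate treating (lat, lon) as planar coordinates;
    # only reasonable over short distances, and the unit is "degrees".
    return math.hypot(lat1 - lat2, lon1 - lon2)

def great_circle(lat1, lon1, lat2, lon2, radius_km=6371.0):
    # Great-circle distance via the haversine formula, in kilometres.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))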
Landmark heuristic
Here you pre-select some important nodes in your graph. Ideally you choose nodes that lie on many frequently used shortest paths.
However, that knowledge is often not available, so you can simply select nodes that are farthest from the already selected landmarks. You can do this with greedy farthest selection: pick the node u which maximizes min_l dist(l, u), where l ranges over the already selected landmarks. For that you can run a Dijkstra from a set of sources, which is very easy to implement: just add all current landmarks to the starting queue at once, run Dijkstra until all distances have been computed, and pick the node with the greatest shortest-path distance as the next landmark. This way your landmarks are spread evenly over the whole graph.
After selecting landmarks you pre-compute the distances from all landmarks to all other nodes and vice versa (from all nodes to all landmarks) and store them. To do this, just run a Dijkstra starting at each landmark until all distances have been computed.
The heuristic h for a node u, where v is the target node, is then defined as follows:
h(u) = max_l(max(dist(u, l) - dist(v, l), dist(l, v) - dist(l, u)))
or, for undirected graphs, just
h(u) = max_l|dist(l, u) - dist(l, v)|
where the maximum is taken over all landmarks l.
After pre-computing those distances, this method obviously also runs in O(1). The pre-computation might take a minute or more, but that should be no problem since you only need to compute it once and never again at query time.
Note that you can also select the landmarks randomly, which is faster, but the results may vary in quality.
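Here is a rough Python sketch of the whole landmark (ALT) pipeline for an undirected graph. It assumes adj is a dict mapping each node to a list of (neighbour, edge_cost) pairs; the function names and the number of landmarks are made up for illustration:

import heapq

def dijkstra_from(sources, adj):
    # Multi-source Dijkstra: returns shortest distances from the nearest source.
    dist = {s: 0 for s in sources}
    pq = [(0, s) for s in sources]
    heapq.heapify(pq)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def pick_landmarks(adj, first, k):
    # Greedy farthest selection: each new landmark maximizes the distance
    # to the set of already chosen landmarks.
    landmarks = [first]
    for _ in range(k - 1):
        dist = dijkstra_from(landmarks, adj)
        landmarks.append(max(dist, key=dist.get))
    return landmarks

def landmark_heuristic(u, target, dist_from_landmark):
    # ALT lower bound for an undirected graph:
    # h(u) = max over landmarks l of |dist(l, u) - dist(l, target)|.
    return max(abs(d[u] - d[target]) for d in dist_from_landmark.values())

# Pre-computation, done once:
#   landmarks = pick_landmarks(adj, some_start_node, k=16)
#   dist_from_landmark = {l: dijkstra_from([l], adj) for l in landmarks}
# At query time, pass landmark_heuristic(u, target, dist_from_landmark) to A*.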
Comparison
Some time ago I created an image which compares some of the shortest-path algorithms I've implemented (PathWeaver on GitHub). Here's the image:
You see a query from the top left to the bottom right (inside the city). Marked are all nodes that were visited by the algorithm in question. The fewer marks, the faster the algorithm found the shortest path.
The compared algorithms are
Ordinary Dijkstra (baseline, visits all nodes up to that distance)
A* with straight-line heuristic (not a good estimate for road networks)
A* with landmarks (randomly computed) (good)
A* with landmarks (greedy farthest selected) (good)
Arc-Flags (okay)
Note that Arc-Flags is a different kind of algorithm. It works with an area, like a rectangle around a city. It then selects all boundary nodes (nodes inside the rectangle that have an edge to a node outside it). With those boundary nodes it performs a reversed Dijkstra (reverse all edges, then run Dijkstra). This efficiently pre-computes the shortest paths from all nodes to the boundary. Edges which are part of such a shortest path are then marked (arcs are flagged). At query time you run an ordinary Dijkstra that only considers marked edges, and therefore follows shortest paths towards the boundary.
This technique can be combined with others like A* and you can select many different rectangles, like all commonly searched cities.
There's also another algorithm I know of (but never implemented), called Contraction Hierarchies. It exploits the fact that you usually start on a small town road, then switch to a bigger road, then a highway, and at the end do the same in reverse until you reach your destination. It therefore gives each node a level and then first tries to reach a high level as quickly as possible and to keep it for as long as possible.
For this it pre-computes shortcuts, which are temporary edges that each represent a whole shortest path, like super-highways.
Bottom line
The right choice of heuristic, and of algorithm in general, depends heavily on your model.
As seen for road networks, especially near smaller towns, the straight-line heuristic doesn't perform well since there often is no road that goes in a straight line. Also, over long distances you tend to drive to the highway first, which sometimes means driving in the opposite direction for several minutes.
For games, however, where you can often move wherever you like, straight-line performs significantly better. But as soon as you introduce roads on which you can travel faster (for example by car), or many obstacles like big mountains, it can become poor again.
The landmark heuristic performs well on most networks since it uses true distances. However, you have some pre-computation, and you trade some space since you need to hold all that pre-computed data.
Heuristic values are wholly domain-dependent, especially admissible ones (which A* requires). So, for example, finding the shortest path on a geographic map might involve a heuristic of the straight-line distance between two nodes, which can be approximated pretty well by computing the Euclidean distance between the (latitude, longitude) coordinates of the two points.
I have a problem similar to the basic TSP but not quite the same.
I have a starting position for a player character, and he has to pick up n objects in the shortest time possible. He doesn't need to return to the original position and the order in which he picks up the objects does not matter.
In other words, the problem is to find the minimum-weight (distance) Hamiltonian path with a given (fixed) start vertex.
What I currently have is an algorithm like this:
best_total_weight_so_far = Inf
foreach possible end vertex:
    add a vertex with 0-weight edges to the start and end vertices
    current_solution = solve TSP for this graph
    remove the 0-weight vertex
    total_weight = Weight(current_solution)
    if total_weight < best_total_weight_so_far
        best_solution = current_solution
        best_total_weight_so_far = total_weight
However, this algorithm seems rather time-consuming, since it has to solve the TSP n-1 times. Is there a better approach to solving the original problem?
It is a rather minor variation of TSP and clearly NP-hard. Any heuristic algorithm for TSP (and you really shouldn't try to do anything better than a heuristic for a game, IMHO) should be easily adaptable to your situation. Even nearest neighbour probably wouldn't be bad; in fact, for your situation it would probably work better than it does for TSP, since in nearest neighbour the closing edge back to the start is often the worst one. Perhaps you can use NN + 2-opt to eliminate edge crossings.
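For what it's worth, here is a small Python sketch of nearest neighbour plus a path-variant of 2-opt with a fixed start. It assumes dist is a full symmetric distance matrix indexed 0..n-1; the naive loops are only meant for small inputs:

def nearest_neighbour_path(dist, start=0):
    # Greedy open path from a fixed start: always visit the closest unvisited point.
    n = len(dist)
    path, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[path[-1]][j])
        path.append(nxt)
        unvisited.remove(nxt)
    return path

def path_length(dist, path):
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

def two_opt(dist, path):
    # 2-opt for an open path: reverse inner segments (never touching index 0,
    # so the start stays fixed) as long as that shortens the total length.
    improved = True
    while improved:
        improved = False
        for i in range(1, len(path) - 1):
            for j in range(i + 1, len(path)):
                cand = path[:i] + path[i:j + 1][::-1] + path[j + 1:]
                if path_length(dist, cand) < path_length(dist, path):
                    path, improved = cand, True
    return path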
On edit: Your problem can easily be reduced to the TSP for directed graphs. Double all of the existing edges so that each is replaced by a pair of arcs. The cost of each arc is simply the cost of the corresponding edge, except for the arcs that go into the start node: make those cost 0 (no cost for returning at the end of the day). If you have code that solves the TSP for directed graphs, you can therefore use it for your case as well.
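A minimal sketch of that reduction, assuming dist is a symmetric matrix indexed 0..n-1 (the resulting asymmetric matrix can then be fed to any directed-TSP solver):

def to_directed_tsp_matrix(dist, start=0):
    # Copy the symmetric costs in both directions, but make every arc *into*
    # the start node free, so "returning home" at the end of the tour costs
    # nothing and an optimal directed tour corresponds to an optimal open path.
    n = len(dist)
    cost = [[dist[i][j] for j in range(n)] for i in range(n)]
    for i in range(n):
        if i != start:
            cost[i][start] = 0
    return cost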
At the risk of it getting slow (20 points should be fine), you can use the good old exact TSP algorithms in the way John describes. 20 points is really easy for TSP: instances with thousands of points are routinely solved, and instances with tens of thousands of points have been solved.
For example, use linear programming and branch & bound.
Set up an LP problem with one variable per edge (there are more edges now because the graph is directed). The variables will be between 0 and 1, where 0 means "don't take this edge in the solution", 1 means "take it", and fractional values sort of mean "take it... a bit" (whatever that means).
The costs are obviously the distances, except for returning to the start. See John's answer.
Then you need constraints: for each node, the sum of its incoming edges must be 1, and the sum of its outgoing edges must be 1. Also, the sum of any pair of arcs that was previously a single undirected edge must be less than or equal to 1. The solution will now typically consist of disconnected triangles, which is the smallest way to connect the nodes so that they all have both an incoming edge and an outgoing edge, and those edges are not both "the same edge". So the sub-tours must be eliminated. The simplest way to do that (probably strong enough for 20 points) is to decompose the solution into connected components, and then require, for each connected component, that the sum of the edges entering it is at least 1 (it can be more than 1), and the same for the edges leaving it. Solve the LP problem again and repeat until there is only one component. There are more cuts you can add, such as the obvious Gomory cuts, but also fancy special TSP cuts (comb cuts, blossom cuts, crown cuts... there are whole books about this), but you won't need any of them for 20 points.
What this gives you is, sometimes, directly the solution. Usually, to begin with, it will contain fractional edges. In that case it still gives you a good lower bound on how long the tour will be, and you can use that within a branch & bound framework to determine the actual best tour. The idea is to pick an edge that was fractional in the result and force it to be either 0 or 1 (this often makes previously integral edges fractional, so you have to keep all "chosen edges" fixed in the whole sub-tree in order to guarantee termination). Now you have two sub-problems; solve each recursively. Whenever the bound from the LP solution becomes longer than the best path you have found so far, you can prune that sub-tree (since it's a lower bound, all integral solutions in that part of the tree can only be even worse). You can initialize the "best so far" solution with a heuristic solution, but for 20 points it doesn't really matter; the techniques described here are already enough to solve 100-point problems.
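As an aside, for around 20 points you don't even need LP: the Held-Karp dynamic programme (a different exact technique than the LP/branch & bound approach described above) solves the fixed-start shortest Hamiltonian path directly in O(n^2 * 2^n) time. A sketch, assuming dist is a full distance matrix; note that at n = 20 the table is large, so in pure Python you'd want arrays rather than nested lists:

def shortest_hamiltonian_path(dist, start=0):
    # dp[mask][v] = length of the shortest path that starts at `start`,
    # visits exactly the nodes in `mask`, and ends at v.
    n = len(dist)
    INF = float("inf")
    full = 1 << n
    dp = [[INF] * n for _ in range(full)]
    dp[1 << start][start] = 0
    for mask in range(full):
        if not (mask >> start) & 1:
            continue
        for v in range(n):
            if dp[mask][v] == INF or not (mask >> v) & 1:
                continue
            for w in range(n):
                if (mask >> w) & 1:
                    continue
                cand = dp[mask][v] + dist[v][w]
                if cand < dp[mask | (1 << w)][w]:
                    dp[mask | (1 << w)][w] = cand
    return min(dp[full - 1])   # the end vertex is free to choose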
Most of the time when implementing a pathfinding algorithm such as A*, we seek to minimize the travel cost along the path. We could also seek to find the optimal path with the fewest number of turns. This could be done by, instead of having a grid of location states, having a grid of location-direction states. For any given location in the old grid, we would have 4 states in that spot representing that location moving left, right, up, or down. That is, if you were expanding to a node above you, you would actually be adding the 'up' state of that node to the priority queue, since we've found the quickest route to this node when going UP. If you were going that direction anyway, we wouldn't add anything to the weight. However, if we had to turn from the current node to get to the expanded node, we would add a small epsilon to the weight, so that two paths of equal distance would not be equal in cost if their number of turns differed. As long as epsilon is << the cost of moving between nodes, it is still a shortest path.
I now pose a similar problem, but with relaxed constraints. I no longer wish to find the shortest path, nor even a path with the fewest turns. My only goal is to find a path of ANY length with numTurns <= n. To clarify, the goal of this algorithm would be to answer the question:
"Does there exist a path P from locations A to B such that there are fewer than or equal to n turns?"
I'm asking whether some sort of greedy algorithm would be helpful here, since I do not require minimum distance or minimum turns. The problem is that if I'm NOT finding the minimum, the algorithm may search through more squares on the board. That is, normally a shortest-path algorithm searches the fewest squares it has to, which is key for performance.
Are there any techniques that come to mind that would provide an efficient way (better than or on par with A*) to find such a path? Again, A* with fewest turns provides the "optimal" solution for distance and number of turns. But for my problem, "optimal" means the fastest way the function can return whether there is a path of <= n turns between A and B. Note that there can be obstacles in the path, but other than that, moving from one square to another always has the same cost (except for turning, as mentioned above).
I've been brainstorming, but I cannot think of anything other than A* with the turn states. It might not be possible to do better than this, but I thought there might be a clever exploitation of my relaxed conditions. I've even considered using just numTurns as the cost of moving on the board, but that could waste a lot of time searching dead-end paths. Thanks very much!
Edit: Final clarification - the path does not have to have the least number of turns, just <= n. The path does not have to be a shortest path; it can be a huge path as long as it only has n turns. The goal is for this function to execute quickly, and I don't even need to record the path; I just need to know whether one exists. Thanks :)
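One way to make the "numTurns as the only cost" idea from the question concrete is a 0-1 BFS over (cell, heading) states, where moving straight costs 0 and turning costs 1; the search stops as soon as the goal is popped, so it never pays for path length at all. A hedged sketch, assuming a 4-connected grid with 0 = free and 1 = obstacle:

from collections import deque

def exists_path_with_max_turns(grid, start, goal, n):
    rows, cols = len(grid), len(grid[0])
    headings = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    INF = float("inf")
    turns = {(start, h): 0 for h in range(4)}    # may start facing any way for free
    dq = deque((start, h) for h in range(4))
    done = set()
    while dq:
        cell, h = dq.popleft()
        if (cell, h) in done:
            continue
        done.add((cell, h))
        t = turns[(cell, h)]
        if cell == goal:
            return t <= n    # states pop in order of turn count, so t is minimal here
        for nh in range(4):
            dr, dc = headings[nh]
            nxt = (cell[0] + dr, cell[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) or grid[nxt[0]][nxt[1]]:
                continue
            cost = 0 if nh == h else 1           # turning is the only cost
            if t + cost < turns.get((nxt, nh), INF):
                turns[(nxt, nh)] = t + cost
                if cost == 0:
                    dq.appendleft((nxt, nh))     # 0-cost moves go to the front
                else:
                    dq.append((nxt, nh))
    return False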
I am working with the A* algorithm, where I have a 2D grid and some obstacles. Right now I have only vertical and horizontal obstacles, but they can be densely packed.
Now, A* works well (i.e. the shortest path is found in most cases), but if I try to go from the top-left corner to the bottom-right, I sometimes see that the path is not the shortest, i.e. there is some clumsiness in the path.
The path seems to deviate from what the shortest path should be.
Here is what I am doing in my algorithm: I start from the source and move outward while calculating the value of each neighbour as (distance from the source + distance to the destination). I keep choosing the minimum cell and repeating until the cell I encounter is the destination, at which point I stop.
My question is: why is A* not guaranteed to give me the shortest path? Or is it, and I am doing something wrong?
Thanks.
A-star is guaranteed to provide the shortest path according to your metric function (not necessarily 'as the bird flies'), provided that your heuristic is "admissible", meaning that it never over-estimates the remaining distance.
Check this link: http://theory.stanford.edu/~amitp/GameProgramming/Heuristics.html
In order to assist in determining your implementation error, we will need details on both your metric, and your heuristic.
Update:
OP's metric function is 10 for an orthogonal move, and 14 for a diagonal move.
OP's heuristic only considers orthogonal moves, and so is "inadmissible"; it overestimates by ignoring the cheaper diagonal moves.
The only cost of an overly conservative heuristic is that additional nodes are visited before the minimum path is found; the cost of an overly aggressive heuristic is that a non-optimal path may be returned. OP should use a heuristic of:
7 * (deltaX + deltaY)
which is a very slight underestimate even when a direct diagonal path is possible, and so should also be performant.
Update #2:
To really squeeze out performance, this is close to an optimum while still being very fast:
7 * min(deltaX, deltaY) + 10 * (max(deltaX, deltaY) - min(deltaX, deltaY))
Update #3:
The 7 above is derived from 14/2, where 14 is the diagonal cost in the metric.
Only your heuristic changes; the metric is "a business rule" and drives all the rest. If you are interested in A* for a hexagonal grid, check out my project here: http://hexgridutilities.codeplex.com/
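The two suggested heuristics translate directly to code; here dx and dy are the absolute coordinate differences to the goal:

def heuristic_simple(dx, dy):
    # Update #1: 7 * (deltaX + deltaY), a deliberate underestimate of the
    # 10/14 orthogonal/diagonal metric, hence admissible.
    return 7 * (dx + dy)

def heuristic_tighter(dx, dy):
    # Update #2: charge 7 per step that could be taken diagonally and 10 per
    # remaining orthogonal step; still never more than the true metric cost.
    return 7 * min(dx, dy) + 10 * (max(dx, dy) - min(dx, dy))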
Update #4 (on performance):
My impression of A* is that it swings between regions of O(N^2) performance and regions of almost O(N) performance. But this is so dependent on the grid or graph, the obstacle placement, and the start and end points that it is hard to generalize. For grids and graphs of particular known shapes or flavours there are a variety of more efficient algorithms, but they often get more complicated as well; TANSTAAFL.
I'm sure you are doing something wrong (maybe an implementation flaw; your idea of using A* sounds correct). A* is guaranteed to give the shortest path; this can be proved mathematically.
See the Wikipedia page; it will give you all the information you need to solve your problem.
NO
A* is one of the fastest pathfinding algorithms, but it doesn't necessarily give the shortest path. If you are looking for correctness over speed, it's best to use Dijkstra's algorithm.
In a tower defense game, you have an NxM grid with a start, a finish, and a number of walls.
Enemies take the shortest path from start to finish without passing through any walls (they aren't usually constrained to the grid, but for simplicity's sake let's say they are; in either case, they can't move through diagonal "holes").
The problem (for this question at least) is to place up to K additional walls to maximize the path the enemies have to take. For example, for K=14
My intuition tells me this problem is NP-hard if (as I'm hoping to do) we generalize this to include waypoints that must be visited before moving to the finish, and possibly also without waypoints.
But, are there any decent heuristics out there for near-optimal solutions?
[Edit] I have posted a related question here.
I present a greedy approach that may be close to optimal (but I couldn't find an approximation factor). The idea is simple: we should block the cells that sit in critical places of the maze. These places help to measure the connectivity of the maze. We can consider vertex connectivity and find a minimum vertex cut that disconnects the start and finish (s, f). After that we remove some of those critical cells.
To turn it into a graph, take the dual of the maze. Find a minimum (s, f) vertex cut in this graph. Then examine each vertex in the cut: remove a vertex if its deletion increases the length of all s-f paths, or if it lies on a minimum-length path from s to f. After eliminating a vertex, recursively repeat the above process, k times in total.
There is an issue with this, though: we might remove a vertex that cuts every path from s to f. To prevent this we can weight such a cutting node as high as possible. That is, first compute a minimum (s, f) cut; if the cut consists of just one node, make the graph weighted and give that vertex a high weight, like n^3, then compute the minimum (s, f) cut again. The single cutting vertex from the previous calculation won't belong to the new cut because of the weighting.
But if there is just one path between s and f (after some iterations), we can't improve it this way. In that case we can fall back to ordinary greedy choices, such as removing a node from one of the shortest paths from s to f that doesn't belong to any cut; afterwards we can go back to working with minimum vertex cuts.
The running time of each step is:
min-cut + path finding for all nodes in the min-cut
O(min-cut) + O(n^2) * O(number of nodes in the min-cut)
Because the number of nodes in the min-cut cannot be greater than O(n^2), in a very pessimistic situation the algorithm is O(k*n^4), but normally it shouldn't take more than O(k*n^3), because the min-cut computation normally dominates the path finding, and the path finding normally doesn't take O(n^2).
I guess the greedy choice is a good starting point for simulated-annealing-type algorithms.
P.S.: Minimum vertex cut is similar to minimum edge cut, and a similar max-flow/min-cut approach can be applied to it: just split each vertex into two vertices, Vi and Vo (input and output). Converting the undirected graph into a directed one is also not hard.
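A sketch of that vertex-splitting construction in Python, using networkx as an assumed max-flow dependency (adj is a dict mapping each node to its neighbours, and s and t are assumed not to be adjacent, since otherwise no vertex cut exists):

import networkx as nx   # assumed dependency; any max-flow implementation would do

def min_vertex_cut(adj, s, t):
    # Each vertex v becomes v_in -> v_out with capacity 1 (unlimited for s and t,
    # which must never be cut); each undirected edge u-v becomes the arcs
    # u_out -> v_in and v_out -> u_in with unlimited capacity. A minimum edge cut
    # of this network can then only use in->out arcs, i.e. it selects vertices.
    G = nx.DiGraph()
    for v in adj:
        if v in (s, t):
            G.add_edge((v, "in"), (v, "out"))            # no capacity = infinite
        else:
            G.add_edge((v, "in"), (v, "out"), capacity=1)
    for u, neighbours in adj.items():
        for v in neighbours:
            G.add_edge((u, "out"), (v, "in"))
            G.add_edge((v, "out"), (u, "in"))
    cut_value, (reachable, _) = nx.minimum_cut(G, (s, "out"), (t, "in"))
    # A vertex is in the cut iff its in->out arc crosses the partition.
    return [v for v in adj
            if (v, "in") in reachable and (v, "out") not in reachable]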
It can easily be shown (proof left as an exercise to the reader) that it is enough to restrict the search so that every one of the K blockades is placed on the current minimum-length route. Note that if there are multiple minimum-length routes, then all of them have to be considered. The reason is that if you don't put any of the remaining blockades on the current minimum-length route, then that route does not change; hence you can put the first available blockade on it immediately during the search. This speeds up even a brute-force search.
But there are more optimizations. You can also always decide to place the next blockade so that it becomes the FIRST blockade on the current minimum-length route, i.e. if you place the blockade on the 10th square of the route, then you mark squares 1..9 as "permanently open" until you backtrack. This again saves an exponential number of placements to explore during the backtracking search.
You can then apply heuristics to cut down the search space or to reorder it, e.g. first try those blockade placements that increase the length of the current minimum-length route the most. You can then run the backtracking algorithm for a limited amount of real time and pick the best solution found so far.
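A hedged sketch of the first pruning only (every blockade goes on the current shortest path), on a 4-connected grid with 0 = free and 1 = wall; note it follows a single shortest path per step, which, as argued above, is not quite sufficient when several minimum-length routes exist:

from collections import deque

def shortest_path(grid, start, goal):
    # Plain BFS; returns the list of cells on one shortest path, or None.
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    dq = deque([start])
    while dq:
        cell = dq.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not grid[nxt[0]][nxt[1]] and nxt not in prev):
                prev[nxt] = cell
                dq.append(nxt)
    return None

def best_blockade_length(grid, start, goal, k):
    # Backtracking search that only tries to place the next blockade on cells
    # of the *current* shortest path; placements that disconnect start and goal
    # are rejected.  Returns the longest achievable shortest-path length.
    path = shortest_path(grid, start, goal)
    if path is None:
        return -1                          # disconnected: not allowed
    best = len(path) - 1
    if k == 0:
        return best
    for cell in path[1:-1]:                # never block the start or the goal
        grid[cell[0]][cell[1]] = 1
        sub = best_blockade_length(grid, start, goal, k - 1)
        grid[cell[0]][cell[1]] = 0
        best = max(best, sub)
    return best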
I believe we can reduce the contained maximum manifold problem to Boolean satisfiability and show NP-completeness through any dependency on this subproblem. Because of this, the algorithms spinning_plate provided are reasonable as heuristics, precomputation and machine learning are reasonable, and the trick becomes finding the best heuristic solution if we wish to blunder forward here.
Consider a board like the following:
..S........
#.#..#..###
...........
...........
..........F
This has many of the problems that cause greedy and gate-bound solutions to fail. If we look at that second row:
#.#..#..###
Our logic gates are, in a 0-based 2D array ordered as [row][column]:
[1][4], [1][5], [1][6], [1][7], [1][8]
We can re-render this as an equation to satisfy the block:
if ([1][9] AND ([1][10] AND [1][11]) AND ([1][12] AND [1][13])):
traversal_cost = INFINITY; longest = False # Infinity does not qualify
Since infinity is an unsatisfiable case, we backtrack and re-render this as:
if ([1][14] AND ([1][15] AND [1][16]) AND [1][17]):
traversal_cost = 6; longest = True
And our hidden Boolean relationship spans all of these gates. You can also show that geometric proofs can't fractalize recursively, because we can always create a wall that is exactly N-1 squares wide or tall, and this represents a critical part of the solution in all cases (therefore, divide and conquer won't help you).
Furthermore, because perturbations across different rows are significant:
..S........
#.#........
...#..#....
.......#..#
..........F
We can show that, without a complete set of computable geometric identities, the complete search space reduces itself to N-SAT.
By extension, we can also show that this is trivial to verify and non-polynomial to solve as the number of gates approaches infinity. Unsurprisingly, this is why tower defense games remain so fun for humans to play. Obviously, a more rigorous proof is desirable, but this is a skeletal start.
Do note that you can significantly reduce the n term in your n-choose-k relation. Because we can recursively show that each perturbation must lie on the critical path, and because the critical path is always computable in O(V+E) time (with a few optimizations to speed things up for each perturbation), you can significantly reduce your search space at a cost of a breadth-first search for each additional tower added to the board.
Because we may tolerably assume O(n^k) for a deterministic solution, a heuristical approach is reasonable. My advice thus falls somewhere between spinning_plate's answer and Soravux's, with an eye towards machine learning techniques applicable to the problem.
The 0th solution: Use a tolerable but suboptimal AI, in which spinning_plate provided two usable algorithms. Indeed, these approximate how many naive players approach the game, and this should be sufficient for simple play, albeit with a high degree of exploitability.
The 1st-order solution: Use a database. Given the problem formulation, you haven't quite demonstrated the need to compute the optimal solution on the fly. Therefore, if we relax the constraint of approaching a random board with no information, we can simply precompute the optimum for all K tolerable for each board. Obviously, this only works for a small number of boards: with V! potential board states for each configuration, we cannot tolerably precompute all optimums as V becomes very large.
The 2nd-order solution: Use a machine-learning step. Promote each step as you close a gap that results in a very high traversal cost, running until your algorithm converges or no more optimal solution can be found than greedy. A plethora of algorithms are applicable here, so I recommend chasing the classics and the literature for selecting the correct one that works within the constraints of your program.
The best heuristic may be a simple heat map generated by a locally state-aware, recursive depth-first traversal, sorting the results by most to least commonly traversed after the O(V^2) traversal. Proceeding through this output greedily identifies all bottlenecks, and doing so without making pathing impossible is entirely possible (checking this is O(V+E)).
Putting it all together, I'd try an intersection of these approaches, combining the heat map and critical path identities. I'd assume there's enough here to come up with a good, functional geometric proof that satisfies all of the constraints of the problem.
At the risk of stating the obvious, here's one algorithm
1) Find the shortest path
2) Test blocking every node on that path and see which one results in the longest path
3) Repeat K times
Naively, this will take O(K*(V + E log E)^2), but with a little work you could improve step 2 by only recalculating partial paths. A sketch of this greedy loop is given below.
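A rough sketch of those three steps, assuming a grid with 0 = free and 1 = wall, and a shortest_path(grid, start, goal) helper (for example a plain BFS) that returns a list of cells or None:

def greedy_block(grid, start, goal, k, shortest_path):
    # Repeat k times: find the current shortest path, try blocking each of its
    # interior cells, keep the single block that lengthens the new shortest
    # path the most (rejecting blocks that disconnect start from goal).
    for _ in range(k):
        path = shortest_path(grid, start, goal)
        best_cell, best_len = None, len(path) - 1
        for cell in path[1:-1]:
            grid[cell[0]][cell[1]] = 1
            new_path = shortest_path(grid, start, goal)
            grid[cell[0]][cell[1]] = 0
            if new_path is not None and len(new_path) - 1 > best_len:
                best_cell, best_len = cell, len(new_path) - 1
        if best_cell is None:       # no single block helps (or all disconnect)
            break
        grid[best_cell[0]][best_cell[1]] = 1
    return grid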
As you mention, simply trying to break the path is difficult, because most breaks add a length of only 1 (or 2); it's hard to find the choke points that lead to big gains.
If you take the minimum vertex cut between the start and the end, you will find the choke points for the entire graph. One possible algorithm is this:
1) Find the shortest path
2) Find the min-cut of the whole graph
3) Find the maximal contiguous node set that intersects one point on the path, block those.
4) Wash, rinse, repeat
Step 3) is the big part, and the reason this algorithm may also perform badly. You could also try:
the smallest node set that connects with other existing blocks;
finding all groupings of contiguous vertices in the vertex cut and testing each of them for the longest path, a la the first algorithm.
The last one is probably the most promising.
If you find a min vertex cut on the whole graph, you're going to find the choke points for the whole graph.
Here is a thought. In your grid, group adjacent walls into islands and treat every island as a graph node. The distance between nodes is the minimal number of walls needed to connect them (i.e. to block the enemy).
In that case you can start maximizing the path length by blocking the cheapest arcs first.
I have no idea if this would work, because you could create new islands with the walls you place, but it could help you work out where to put walls.
I suggest using a modified breadth-first search with a K-length priority queue tracking the best K paths between islands.
For every island of connected walls, I would pretend that it is a light (a special light that can only send out horizontal and vertical rays).
Use ray-tracing to see which other islands the light can hit.
Say island 1 (i1) hits i2, i3, i4 and i5 but doesn't hit i6, i7..
Then you would have line(i1,i2), line(i1,i3), line(i1,i4) and line(i1,i5).
Mark the distance of all grid points as infinity. Set the start point to 0.
Now use breadth-first search from the start. At every grid point, mark its distance as one more than the minimum distance of its neighbours.
But.. here is the catch..
Every time you get to a grid point that is on a line() between two islands, instead of recording the distance as the minimum of its neighbours, you make it a priority queue of length K and record the K shortest paths to that line() from any of the other line()s.
This priority queue then stays the same until you get to the next line(), where it aggregates all the priority queues going into that point.
You haven't shown the need for this algorithm to run in real time, but I may be wrong about this premise. If it doesn't need to, you could precalculate the block positions.
If you can do this beforehand and then simply make the AI build the maze rock by rock as if it were a kind of tree, you could use genetic algorithms to ease your need for heuristics. You would need to load some kind of genetic algorithm framework, start with a population of non-movable blocks (your map) and randomly placed movable blocks (the blocks the AI would place), and then evolve the population through crossover and mutation of the movable blocks, evaluating individuals by rewarding the longest calculated path. You would then simply have to write a resource-efficient path calculator, without the need for heuristics in your code. From the last generation of your evolution you would take the highest-ranking individual, which would be your solution, i.e. your desired block pattern for this map.
Genetic algorithms are proven, under ideal conditions, to take you to a local maximum (or minimum) in reasonable time, which may be impossible to reach with analytic solutions on a sufficiently large data set (i.e. a big enough map in your situation).
You haven't stated the language in which you are going to develop this algorithm, so I can't propose frameworks that may perfectly suit your needs.
Note that if your map is dynamic, meaning that it may change between tower defense iterations, you may want to avoid this technique, since it may be too expensive to re-evolve an entire new population every wave.
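For what it's worth, here is a very rough sketch of that evolutionary loop, without any framework. Every parameter value is arbitrary, and shortest_path(grid, start, goal) is an assumed helper (e.g. a BFS) that returns a list of cells or None:

import random

def evolve_walls(grid, start, goal, k, shortest_path,
                 pop_size=50, generations=200, seed=0):
    # An individual is a set of k extra wall cells; fitness is the length of the
    # enemies' shortest path with those walls placed (0 if they disconnect the map).
    rng = random.Random(seed)
    free = [(r, c) for r, row in enumerate(grid) for c, v in enumerate(row)
            if v == 0 and (r, c) not in (start, goal)]

    def fitness(walls):
        for r, c in walls:
            grid[r][c] = 1
        path = shortest_path(grid, start, goal)
        for r, c in walls:
            grid[r][c] = 0
        return len(path) - 1 if path else 0

    def crossover(a, b):
        return frozenset(rng.sample(list(set(a) | set(b)), k))

    def mutate(walls):
        walls = set(walls)
        walls.discard(rng.choice(list(walls)))      # drop one wall...
        while len(walls) < k:
            walls.add(rng.choice(free))             # ...and add a random new one
        return frozenset(walls)

    population = [frozenset(rng.sample(free, k)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]       # simple truncation selection
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)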
I'm not at all an algorithms expert, but looking at the grid makes me wonder whether Conway's Game of Life might somehow be useful for this. With a reasonable initial seed and well-chosen rules about the birth and death of towers, you could try many seeds and subsequent generations thereof in a short period of time.
You already have a measure of fitness in the length of the creeps' path, so you could pick the best one accordingly. I don't know how well (if at all) it would approximate the best path, but it would be an interesting thing to use in a solution.