Travelling Salesman Constructive Heuristics - algorithm

Say we have a circular list representing a solution to the traveling salesman problem. This list is initially empty.
If the user is allowed to enter a city and its coordinates one by one, what heuristics could be used to insert those coordinates into the already existing tour?
An example is the nearest neighbor heuristic: it inserts the new coordinate after the nearest coordinate already in the tour.
What are some other options (pseudo-code if possible)?

There are plenty of construction heuristics you can use, such as First Fit, First Fit Decreasing, Best Fit, Best Fit Decreasing and Cheapest Insertion.
Those construction heuristics are normally applied to bin packing, but they can be adapted to TSP too. Documentation about those heuristics is here.
Since you're only inserting 1 unassigned entity at a time, all of these basically revert to what you call the nearest neighbor heuristic (with slight variations in tie-breaking), but note that this is not what is usually called Nearest Neighbor: Nearest Neighbor always appends the nearest of all unassigned entities to the end of the line.
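For concreteness, here is a minimal sketch of Cheapest Insertion into a circular tour (Python with Euclidean distances; the helper names are my own, not from the documentation referenced above):

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cheapest_insertion(tour, city):
    # Insert `city` where it increases the circular tour length the least.
    if len(tour) < 2:
        tour.append(city)
        return
    best_i, best_delta = 0, float("inf")
    for i in range(len(tour)):
        a, b = tour[i], tour[(i + 1) % len(tour)]   # edge (a, b) of the tour
        delta = dist(a, city) + dist(city, b) - dist(a, b)
        if delta < best_delta:
            best_i, best_delta = i + 1, delta
    tour.insert(best_i, city)

tour = []
for c in [(0, 0), (10, 0), (5, 8), (2, 1)]:   # cities arrive one at a time
    cheapest_insertion(tour, c)
print(tour)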
Now, what you really want is a decent solution without having to restart your entire construction heuristic. That's harder: welcome to repeated planning and real-time planning (and this documentation). I am working on an open source example for TSP and vehicle routing that does real-time planning.

You can of course generalize the idea you have mentioned:
Define kth_path(v) = the minimum weight of a path from v through min{k, #unvisited} cities, and insert along the cheapest such lookahead path.
Note that calculating the kth path is O(|V|^k) [this bound is not tight].
Special cases:
For k=1 you get the nearest neighbor heuristic, as you suggested.
For k=|V| you get an optimal solution [note it will be very expensive to calculate].
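For intuition, here is a minimal Python sketch of the k = |V| special case: exhaustive re-optimization over all orderings, feasible only for tiny instances since it is O(|V|!). The helper names are my own:

import math
from itertools import permutations

def tour_length(tour):
    # Sum of edge lengths around the circular tour, including the wrap-around.
    return sum(math.hypot(a[0] - b[0], a[1] - b[1])
               for a, b in zip(tour, tour[1:] + tour[:1]))

def optimal_tour(cities):
    # Fix the first city to avoid counting rotations of the same tour.
    first, rest = cities[0], cities[1:]
    return min(([first] + list(p) for p in permutations(rest)), key=tour_length)

print(optimal_tour([(0, 0), (10, 0), (5, 8), (2, 1), (7, 3)]))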

There is no fundamentally different insertion heuristic, because inserting into a tour always comes down to finding the nearest coordinate; at least I don't know an algorithm that inserts a coordinate any better while only knowing the nearest coordinate. But there are plenty of algorithms for finding a good tour. A good example is the Christofides algorithm: it works only in Euclidean space, but it gives you a guarantee that the solution is within 3/2 of the optimum. It's not easy to code, though; the Edmonds Blossom V algorithm in particular requires expert skill. And such a guarantee matters: how else would you explain to users that your method can deliver nonsense in some rare situations?

Related

A* - Graph Traversal Heuristic

I have a graph that represents a city. I know the location of places of interest (nodes, which have an Importance value), the location of the hotel I'm staying in, how the nodes are connected, and the traversal time between them, and I have access to latitude and longitude. There are no issues converting from time to distance and vice versa.
The objective is to tour the city, maximizing the importance per day while limiting one day of travel to 10 hours. A day begins and ends at the hotel. I have a working A* algorithm that chooses the lowest value, but with no heuristic yet, which I guess makes it a BB for now. With that in mind:
Since I have access to Lat/Long, my first stab at a heuristic, while only dealing with times, would be the distance as the crow flies between a node and the hotel. Would this be an admissible heuristic? It gives me the shortest possible distance and time, so it wouldn't overestimate.
Now let's say the Importance of a node is between 1-4. In order to factor it in, one idea could be g(neighbor) = g(current) + (edge_cost / Importance^2). Assuming this would be valid (if not, why?):
But now the heuristic values would be in a different unit. Could a solution to this simply be to give the hotel Importance = 1? If the value is the same, will it still be admissible? EDIT: I think this will end up giving me problems because of the difference in scale.
I still have to restrict the total amount of time. Should each node keep track of the total time spent, in order to compare to the limit, plus the g() and h() values, because of the different units?
And finally:
Since I have to start and end at the same node, what comes to mind is to explore a node and, should I find the hotel, see if I still have time to explore the neighbors instead of going back. However, if I still have time to expand one more node, but time then runs out and I can't reach the hotel from there, I assume I'll have to backtrack to the parent.
I can't help but see similarities to the knapsack problem. Even though I have to use A*, is there any lesson I can take from it?
Must my heuristic be consistent in this case? If so, why?
By the way, the purpose here is pathfinding first, optimizations second.
This actually looks like a combination of the travelling salesman problem (TSP) and the knapsack problem (KP). It's KP in this respect: the knapsack capacity is 10 (the total hours available in a day) and the locations are the items. The item value equals the location's value, and the item weight equals the time it takes to travel to the location (plus the location's portion of the trip back to the hotel). The challenge arises from the fact that an item's weight is unknown until you solve the optimal tour through the selected locations: enter the TSP and pathfinding.
One approach might be to use a pathfinding algorithm (e.g. A*, Bellman–Ford, or Dijkstra's algorithm) primarily to compute a distance matrix between each pair of nodes. The distance matrix can then be leveraged while solving the TSP portion of the problem: finding a tour through the locations, using the total time as the weight.
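A hedged sketch of that first step, assuming a graph stored as a dict of dicts mapping node -> {neighbor: travel time} (the names are illustrative, not from the question):

import heapq

def dijkstra(graph, source):
    # Standard Dijkstra: shortest travel time from source to every node.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def distance_matrix(graph, places):
    # One Dijkstra run per place of interest (plus the hotel).
    matrix = {}
    for s in places:
        dist = dijkstra(graph, s)
        matrix[s] = {t: dist.get(t, float("inf")) for t in places}
    return matrix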
The next step is up to you. If you are looking for an approximate solution, many heuristics exist for both TSP and KP: See Christofides TSP Heuristic, or the Minimum TSP and Maximum Knapsack entries at the Compendium of NP Optimization problems.
If on the other hand you seek an optimal solution, you may be out of luck. Still I recommend you find a copy of Graph Theory. An Algorithmic Approach by Nicos Christofides (ISBN-13: 978-0121743505). It provides heuristics for early backtracking in a Depth-First-Search that expedite the search for optimal solutions to several NP-Complete problems.

Is A* really better than Dijkstra in real-world path finding?

I'm developing a pathfinding program. Theoretically, A* is said to be better than Dijkstra; in fact, the latter is a special case of the former. However, when testing on real-world data, I've begun to doubt whether A* really is better.
I used data of New York City, from 9th DIMACS Implementation Challenge - Shortest Paths, in which each node's latitude and longitude is given.
When applying A*, I need to calculate the spherical distance between two points using the Haversine formula, which involves sin, cos, arcsin and a square root, all of which are very time-consuming.
The results are:
Using Dijkstra: 39.953 ms, expanded 256540 nodes.
Using A*: 108.475 ms, expanded 255135 nodes.
Notice that A* expanded 1405 fewer nodes; however, computing the heuristic took far more time than it saved.
To my understanding, the reason is that in a very large real-world graph the influence of the heuristic is very small and its effect can be ignored, while the time to compute it dominates.
I think you're more or less missing the point of A*. It is intended to be a very performant algorithm, partly by intentionally doing more work but with a cheap heuristic, and you're tearing that to bits when burdening it with a heavy, extremely accurate prediction heuristic.
For node selection in A* you should just use an approximation of distance. Simply using (latdiff^2) + (lngdiff^2) as the approximated distance should make your algorithm much more performant than Dijkstra, and not give much worse results in any real-world scenario. The results should actually be exactly the same if you calculate the travelled distance of a selected node properly with the Haversine formula. Just use a cheap formula for selecting potential next traversals.
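A sketch of the contrast, assuming (lat, lng) tuples. Note the squared difference is in different units than metres, so it is exactly the kind of cheap, approximate selection criterion this answer advocates; whether it stays admissible for your edge costs is something to check for your data:

import math

def cheap_h(node, goal):
    # Squared lat/lng difference: no trigonometry, cheap to evaluate.
    return (node[0] - goal[0]) ** 2 + (node[1] - goal[1]) ** 2

def haversine_m(node, goal, r=6371000.0):
    # Exact great-circle distance in metres, for the travelled-cost side.
    lat1, lat2 = math.radians(node[0]), math.radians(goal[0])
    dlat = lat2 - lat1
    dlng = math.radians(goal[1] - node[1])
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlng / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))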
A* can be reduced to Dijkstra by setting some trivial parameters. There are three possible ways in which it does not improve on Dijkstra:
The heuristic used is incorrect: it is not a so-called admissible heuristic. A* should use a heuristic which does not overestimate the distance to the goal as part of its cost function.
The heuristic is too expensive to calculate.
The real-world graph structure is special.
In the case of the latter you should try to build on existing research, e.g. "Highway Dimension, Shortest Paths, and Provably Efficient Algorithms" by Abraham et al.
Like everything else in the universe, there's a trade-off. You could use Dijkstra's algorithm to precisely calculate the heuristic, but that would defeat the purpose, wouldn't it?
A* is a great algorithm in that it makes you lean towards the goal, having a general idea of which direction to expand first. That said, you should keep the heuristic as simple as possible, because all you need is a general direction.
In fact, more precise geometric calculations that are not based on the actual data do not necessarily give you a better direction. As long as they are not based on the data, all such heuristics give you just a direction, and they are all equally (in)correct.
In general A* is more performant than Dijkstra's, but it really depends on the heuristic function you use in A*. You'll want an h(n) that's optimistic, i.e. h(n) should be less than the true cost, so that it finds the lowest-cost path. If h(n) >= cost, you'll end up in a situation like the one you've described.
Could you post your heuristic function?
Also bear in mind that the performance of these algorithms highly depends on the nature of the graph, which is closely related to the accuracy of the heuristic.
If you compare Dijkstra vs A* when navigating out of a labyrinth where each passage corresponds to a single graph edge, there is probably very little difference in performance. On the other hand, if the graph has many edges between "far-away" nodes, the difference can be quite dramatic. Think of a robot (or an AI-controlled computer game character) navigating in almost open terrain with a few obstacles.
My point is that, even though the New York dataset you used is definitely a good example of "real-world" graph, it is not representative of all real-world path finding problems.

Modifying A star routing for a euclidean graph with edge weights as well as distances

I'm in the process of writing an application to suggest circular routes over OpenStreetMap data subject to some constraints (the Orienteering Problem). The innermost loop of the algorithm I'm trialling needs to find the lowest-cost path between two given points. Given the layout of the graph (basically Euclidean), the A* algorithm seems likely to produce results in the fastest time. However, as well as distances on my edges (representing actual distances on the map), I also have a series of weights (currently scaled from 0.0, least desirable, to 1.0, most desirable) indicating how desirable a particular edge (road/path/etc.) is, calculated according to some metrics I've devised for my application.
I would like to modify my distances based on these weights. I am aware that the standard A* heuristic relies on the true cost of the path being at least as great as the estimate (based on the Euclidean distance between the points). So my first thought was to come up with a scheme where the minimum edge distance is the real distance (for weight 1.0) and the distance is increased as the weight decreases (for instance, quadrupling the distance for weight 0.0). Does this seem a sensible approach, or is there a better standard technique for fast routing under these circumstances?
I believe your approach is the most sane. Apparently I'm working on a similar problem, and I decided to use exactly the same strategy.
The A* algorithm doesn't necessarily rely on "true distances". In fact, it's not even about distances: you may minimize some other physical quantity, as long as the heuristic function has the same physical units.
For instance, my problem is to minimize travel time, where the velocity at any given point depends on the location, the time, and the chosen direction. My heuristic function is the rough distance (my problem is on the Earth's surface, and calculating the great-circle distance is somewhat pricey) divided by the maximum allowed velocity. That is, it has units of time, and it is interpreted as the most optimistic time to reach the finishing point from the given location.
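A minimal sketch of that time-based heuristic, with an illustrative V_MAX and a cheap planar lower bound standing in for the rough distance:

import math

V_MAX = 15.0  # maximum attainable velocity, in m/s (illustrative value)

def rough_distance(a, b):
    # Cheap planar approximation; any lower bound on the true distance works.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def h_time(node, goal):
    # Most optimistic time to reach the goal: it never overestimates, so A*
    # stays admissible while minimizing time rather than distance.
    return rough_distance(node, goal) / V_MAX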
The relevant question is: "what do you actually want to minimize?". You need to end up with a single "modified distance", so that your pathfinding algorithm can pick the smallest one.
The continued usefulness of the A* algorithm depends on exactly how you integrate "desirability" into your routing distance. A* requires an "admissible heuristic" which is optimistic: the heuristic estimate for your "modified distance" must not exceed the actual "modified distance" (otherwise, the path it finds may not actually be optimal...). One way to ensure that is to make sure that the modified distance is always larger than the original, Euclidean distance for any given step; then, any A* heuristic admissible to minimize the Euclidean distance will also be admissible to minimize the modified distance.
For example, if you compute modified_distance = euclidean_distance / desirability_rating (for 0<desirability_rating<=1), your modified_distance will never be smaller than the euclidean_distance: whatever A* heuristic you were using for your unweighted paths will still be optimistic, so it will be usable for A*. (although, in regions where every path is undesirable, a highly over-optimistic A* heuristic may not improve performance as much as you would like...)
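A one-function sketch of that scheme. The EPSILON floor is my own addition to handle the question's desirability of exactly 0.0; with 0.25, weight 0.0 quadruples the distance, matching the question's suggestion:

EPSILON = 0.25  # floor for desirability: 0.0 maps to 4x the distance

def modified_distance(euclidean_distance, desirability):
    # Never smaller than euclidean_distance for desirability <= 1.0, so any
    # heuristic admissible for plain Euclidean distance stays admissible here.
    return euclidean_distance / max(desirability, EPSILON)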
Faster routing can be done with A*. In my own project I see a speed-up of approximately 4x compared to Dijkstra, so even faster than bidirectional Dijkstra.
But there are a lot more techniques to improve query speed, e.g. introducing shortcuts and running A* on that graph. Here is a more detailed answer.

Approximation algorithm for TSP variant, fixed start and end anywhere but starting point + multiple visits at each vertex ALLOWED

NOTE: Because the trip does not end at the same place it started, and because every point can be visited more than once as long as I still visit all of them, this is not really a TSP variant; I only call it that for lack of a better definition of the problem.
So..
Suppose I am going on a hiking trip with n points of interest. These points are all connected by hiking trails. I have a map showing all trails with their distances, giving me a directed graph.
My problem is how to approximate a tour that starts at a point A, visits all n points of interest, and ends anywhere but the point where it started, while being as short as possible.
Due to the nature of hiking, I figured this would sadly not be a symmetric problem (or can I convert my asymmetric graph to a symmetric one?), since going from high to low altitude is obviously easier than the other way around.
Also, I believe it has to be an algorithm that works for non-metric graphs, where the triangle inequality is not satisfied, since going from a to b to c might be faster than taking a really long and weird road from a to c directly. I did consider whether the triangle inequality effectively still holds: since there is no restriction on how many times I visit each point, as long as I visit all of them, I would always choose the shorter of two distinct routes from a to c, and thus never take the long and weird road.
I believe my problem is easier than TSP, so TSP algorithms don't fit it directly. I thought about using a minimum spanning tree, but I have a hard time convincing myself that it can be applied to a non-metric, asymmetric, directed graph.
What I really want are some pointers as to how I can come up with an approximation algorithm that will find a near optimal tour through all n points
To reduce your problem to asymmetric TSP, introduce a new node u and add arcs of length L from u to A and from every node except A to u, where L is very large (large enough that no optimal solution revisits u). Delete u from the tour to obtain a path from A to some other node via all the others. Unfortunately this reduction preserves the objective only additively, which makes the approximation guarantees worse by a constant factor.
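A hedged sketch of that reduction, with distances stored as a dict of dicts (missing arcs meaning forbidden) and BIG standing in for L:

BIG = 10**9  # the "L" above: large enough that no optimal tour lingers at u

def add_dummy(dist, start, dummy="u"):
    # dist: {v: {w: arc length}}, asymmetric; returns a copy including the dummy.
    out = {v: dict(arcs) for v, arcs in dist.items()}
    out[dummy] = {start: BIG}      # arc u -> A
    for v in dist:
        if v != start:
            out[v][dummy] = BIG    # arc v -> u for every node but A
    return out

Solve ATSP on the result; the tour must enter u from some node and leave it through A, so deleting u leaves a path that starts at A and ends wherever the tour entered u.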
The target of the reduction Evgeny pointed out is non-metric symmetric TSP, so that reduction is not useful to you, because all the known approximations require metric instances. Assuming the collection of trails forms a planar graph (or is close to one), there is a constant-factor approximation due to Gharan and Saberi, which may unfortunately be rather difficult to implement and may not give reasonable results in practice. Frieze, Galbiati, and Maffioli give a simple log-factor approximation for general graphs.
If there are a reasonable number of trails, branch and bound might be able to give you an optimal solution. Both G&S and branch and bound require solving the Held-Karp linear program for ATSP, which may be useful in itself for evaluating other approaches. For many symmetric TSP instances that arise in practice, it gives a lower bound on the cost of an optimal solution within 10% of the true value.
You can simplify this problem to a normal TSP problem on n+1 vertices. To do this, take node A and all the points of interest and compute the shortest path between each pair of these points, using an all-pairs shortest path algorithm on the original graph. Or, if n is significantly smaller than the original graph size, run a single-source shortest path algorithm from each of these n+1 vertices. You can also set the length of all paths ending at A to some constant larger than any other path, which allows the trip to end anywhere (this may only be needed for TSP algorithms that find a round trip).
As a result, you get a complete graph which is metric but still asymmetric. All you need now is to solve a normal TSP problem on this graph. If you want to convert this asymmetric graph to a symmetric one, Wikipedia explains how to do it.
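A sketch of that construction: Floyd-Warshall over the full trail graph, then restricted to A plus the n points of interest (names are illustrative):

def metric_closure(dist, keep):
    # dist[u][v]: direct trail length over the full graph (missing = no trail);
    # keep: node A plus the n points of interest.
    INF = float("inf")
    nodes = list(dist)
    d = {u: {v: dist[u].get(v, INF) for v in nodes} for u in nodes}
    for u in nodes:
        d[u][u] = 0.0
    for k in nodes:                      # Floyd-Warshall, O(|V|^3)
        for i in nodes:
            dik = d[i][k]
            for j in nodes:
                if dik + d[k][j] < d[i][j]:
                    d[i][j] = dik + d[k][j]
    return {u: {v: d[u][v] for v in keep} for u in keep}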

Generating a tower defense maze (longest maze with limited walls) - near-optimal heuristic?

In a tower defense game, you have an NxM grid with a start, a finish, and a number of walls.
Enemies take the shortest path from start to finish without passing through any walls (they aren't usually constrained to the grid, but for simplicity's sake let's say they are. In either case, they can't move through diagonal "holes")
The problem (for this question at least) is to place up to K additional walls to maximize the path the enemies have to take. For example, for K=14
My intuition tells me this problem is NP-hard if (as I'm hoping to do) we generalize this to include waypoints that must be visited before moving to the finish, and possibly also without waypoints.
But, are there any decent heuristics out there for near-optimal solutions?
[Edit] I have posted a related question here.
I present a greedy approach that may be close to optimal (though I couldn't derive an approximation factor). The idea is simple: we should block the cells at critical places in the maze. Such places can be found by measuring the connectivity of the maze: we consider vertex connectivity and find a minimum vertex cut that disconnects the start and finish (s, f). After that we remove some critical cells.
To turn the maze into a graph, take the dual of the maze. Find a minimum (s, f) vertex cut in this graph, then examine each vertex in the cut. We remove a vertex if its deletion increases the length of all s-f paths, or if it lies on a minimum-length path from s to f. After eliminating a vertex, repeat the process recursively, k times in total.
But there is an issue: we must not remove a vertex that cuts every path from s to f. To prevent this we can weight such cut vertices very highly: first compute a minimum (s, f) cut; if the cut consists of just one vertex, give that vertex a large weight such as n^3 and recompute the minimum (s, f) cut. The single cut vertex from the previous computation won't belong to the new cut because of its weight.
But if there is just one path between s and f (after some iterations), we can't improve it this way. In that case we can fall back to an ordinary greedy step, such as removing a node from one of the shortest s-f paths that doesn't belong to any cut, and then return to working with minimum vertex cuts.
The running time of each step is:
min-cut + path finding for all nodes in the min-cut
= O(min-cut) + O(n^2) * O(number of nodes in the min-cut)
Because the number of nodes in the min-cut cannot be larger than O(n^2), in a very pessimistic situation the algorithm is O(k*n^4); normally it shouldn't take more than O(k*n^3), because the min-cut computation normally dominates the path finding, and the path finding normally doesn't take O(n^2).
I guess this greedy choice is a good starting point for simulated-annealing-type algorithms.
P.S.: Minimum vertex cut is similar to minimum edge cut, and a similar max-flow/min-cut approach can be applied to it: just model each vertex v as two vertices, v_in and v_out, for its inputs and outputs. Converting the undirected graph to a directed one is not hard either.
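A sketch of that splitting, producing a capacity map ready for any max-flow routine (string node names are assumed for readability):

INF = float("inf")

def split_vertices(graph, s, t):
    # graph: {v: set of neighbors}, undirected. Each vertex v becomes the arc
    # v_in -> v_out with capacity 1 (the cost of deleting v); s and t get
    # infinite capacity so the cut can never select them.
    cap = {}
    for v in graph:
        cap[(v + "_in", v + "_out")] = INF if v in (s, t) else 1
        for w in graph[v]:
            cap[(v + "_out", w + "_in")] = INF   # undirected edge -> two arcs
    return cap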
It can easily be shown (proof left as an exercise to the reader) that it is enough to search for solutions in which every one of the K blockades is placed on the current minimum-length route. Note that if there are multiple minimum-length routes, all of them have to be considered. The reason is that if you don't put any of the remaining blockades on the current minimum-length route, that route does not change; hence you can place the first available blockade on it immediately during the search. This speeds up even a brute-force search.
But there are more optimizations. You can also always decide to place the next blockade so that it becomes the FIRST blockade on the current minimum-length route, i.e. if you place the blockade on the 10th square of the route, you mark squares 1..9 as "permanently open" until you backtrack. This again saves an exponential number of squares during the backtracking search.
You can then apply heuristics to prune the search space or to reorder it, e.g. first try the blockade placements that increase the length of the current minimum-length route the most. You can then run the backtracking algorithm for a limited amount of real time and pick the best solution found thus far.
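A skeletal version of that pruned backtracking. The grid API and the BFS-based shortest_path/path_length helpers are assumed; the "first blockade" and reordering optimizations are omitted, and only a single shortest route is tried per step, whereas the text notes that all minimum-length routes must be considered:

def best_walls(grid, k, shortest_path, path_length):
    # Exhaustive search, pruned so each wall lands on the current shortest path.
    best = {"len": path_length(grid), "walls": []}

    def search(walls):
        if len(walls) == k:
            length = path_length(grid)
            if length > best["len"]:
                best["len"], best["walls"] = length, list(walls)
            return
        for cell in shortest_path(grid):       # only cells on the current route
            grid.block(cell)
            if path_length(grid) != float("inf"):   # keep start-finish connected
                search(walls + [cell])
            grid.unblock(cell)

    search([])
    return best["walls"]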
I believe we can reduce the contained maximum manifold problem to boolean satisfiability and show NP-completeness through any dependency on this subproblem. Because of this, the algorithms spinning_plate provided are reasonable as heuristics, precomputing and machine learning are reasonable, and the trick becomes finding the best heuristic solution if we wish to blunder forward here.
Consider a board like the following:
..S........
#.#..#..###
...........
...........
..........F
This has many of the problems that cause greedy and gate-bound solutions to fail. If we look at that second row:
#.#..#..###
Our logic gates are, in 0-based 2D array ordered as [row][column]:
[1][4], [1][5], [1][6], [1][7], [1][8]
We can re-render this as an equation that must be satisfied to get past the blocked row:
if ([1][9] AND ([1][10] AND [1][11]) AND ([1][12] AND [1][13])):
traversal_cost = INFINITY; longest = False # Infinity does not qualify
Rejecting infinity as an unsatisfiable case, we backtrack and re-render this as:
if ([1][14] AND ([1][15] AND [1][16]) AND [1][17]):
traversal_cost = 6; longest = True
And our hidden boolean relationship falls amongst all of these gates. You can also show that geometric proofs can't fractalize recursively, because we can always create a wall that's exactly N-1 width or height long, and this represents a critical part of the solution in all cases (therefore, divide and conquer won't help you).
Furthermore, because perturbations across different rows are significant:
..S........
#.#........
...#..#....
.......#..#
..........F
We can show that, without a complete set of computable geometric identities, the complete search space reduces itself to N-SAT.
By extension, we can also show that this is trivial to verify and non-polynomial to solve as the number of gates approaches infinity. Unsurprisingly, this is why tower defense games remain so fun for humans to play. Obviously, a more rigorous proof is desirable, but this is a skeletal start.
Do note that you can significantly reduce the n term in your n-choose-k relation. Because we can recursively show that each perturbation must lie on the critical path, and because the critical path is always computable in O(V+E) time (with a few optimizations to speed things up for each perturbation), you can significantly reduce your search space at a cost of a breadth-first search for each additional tower added to the board.
Because we may tolerably assume O(n^k) for a deterministic solution, a heuristic approach is reasonable. My advice thus falls somewhere between spinning_plate's answer and Soravux's, with an eye towards machine learning techniques applicable to the problem.
The 0th solution: Use a tolerable but suboptimal AI, in which spinning_plate provided two usable algorithms. Indeed, these approximate how many naive players approach the game, and this should be sufficient for simple play, albeit with a high degree of exploitability.
The 1st-order solution: Use a database. Given the problem formulation, you haven't quite demonstrated the need to compute the optimal solution on the fly. Therefore, if we relax the constraint of approaching a random board with no information, we can simply precompute the optimum for all K tolerable for each board. Obviously, this only works for a small number of boards: with V! potential board states for each configuration, we cannot tolerably precompute all optimums as V becomes very large.
The 2nd-order solution: Use a machine-learning step. Promote each step as you close a gap that results in a very high traversal cost, running until your algorithm converges or no more optimal solution can be found than greedy. A plethora of algorithms are applicable here, so I recommend chasing the classics and the literature for selecting the correct one that works within the constraints of your program.
The best heuristic may be a simple heat map generated by a locally state-aware, recursive depth-first traversal, sorting the results by most to least commonly traversed after the O(V^2) traversal. Proceeding through this output greedily identifies all bottlenecks, and doing so without making pathing impossible is entirely possible (checking this is O(V+E)).
Putting it all together, I'd try an intersection of these approaches, combining the heat map and critical path identities. I'd assume there's enough here to come up with a good, functional geometric proof that satisfies all of the constraints of the problem.
At the risk of stating the obvious, here's one algorithm:
1) Find the shortest path.
2) Test blocking each node on that path and see which block results in the longest path.
3) Repeat K times.
Naively this will take O(K*(V + E log E)^2), but with a little work you could improve step 2 by only recalculating partial paths.
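A minimal sketch of those three steps, reusing the assumed grid helpers from the backtracking skeleton earlier (shortest_path, path_length, block/unblock):

def greedy_walls(grid, k, shortest_path, path_length):
    placed = []
    for _ in range(k):
        best_cell, best_len = None, path_length(grid)
        for cell in shortest_path(grid):           # step 1: current shortest path
            grid.block(cell)                       # step 2: try blocking each node
            length = path_length(grid)
            if length != float("inf") and length > best_len:
                best_cell, best_len = cell, length
            grid.unblock(cell)
        if best_cell is None:                      # no block lengthens the path
            break
        grid.block(best_cell)                      # step 3: commit and repeat
        placed.append(best_cell)
    return placed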
As you mention, simply trying to break the path is difficult, because most breaks only add a length of 1 (or 2); it's hard to find the choke points that lead to big gains.
If you take the minimum vertex cut between the start and the end, you will find the choke points for the entire graph. One possible algorithm is this:
1) Find the shortest path.
2) Find the min-cut of the whole graph.
3) Find the maximal contiguous node set that intersects one point on the path; block those.
4) Wash, rinse, repeat.
Step 3 is the big part, and also why this algorithm may perform badly. You could also try: the smallest node set that connects with other existing blocks, or finding all groupings of contiguous vertices in the vertex cut and testing each of them for the longest path, as in the first algorithm.
The latter might be the most promising.
Here is a thought: in your grid, group adjacent walls into islands and treat every island as a graph node. The distance between nodes is the minimal number of walls needed to connect them (i.e. to block the enemy).
In that case you can start maximizing the path length by blocking the cheapest arcs first.
I have no idea whether this would work, because your placed walls could create new islands, but it could help work out where to put walls.
I suggest using a modified breadth first search with a K-length priority queue tracking the best K paths between each island.
I would, for every island of connected walls, pretend that it is a light (a special light that can only send out horizontal and vertical rays).
Use ray-tracing to see which other islands each light can hit.
Say island i1 hits i2, i3, i4 and i5 but doesn't hit i6, i7, and so on; then you would have line(i1,i2), line(i1,i3), line(i1,i4) and line(i1,i5).
Mark the distance of all grid points as infinity, and set the start point to 0. Now run a breadth-first search from the start: at every grid point, record its distance as the minimum distance of its neighbors.
But here is the catch:
every time you reach a grid point that lies on a line() between two islands, instead of recording its distance as the minimum of its neighbors, you make it a priority queue of length K and record the K shortest paths to that line() from any of the other line()s.
This priority queue then stays the same until you get to the next line(), where it aggregates all the priority queues flowing into that point.
You haven't shown the need for this algorithm to be real-time, though I may be wrong about this premise. If it doesn't need to be, you can precalculate the block positions.
If you can do this beforehand and then simply have the AI build the maze rock by rock as if it were a kind of tree, you could use genetic algorithms to ease your need for heuristics. You would load a genetic algorithm framework, start with a population consisting of your non-movable blocks (the map) plus randomly placed movable blocks (the blocks the AI would place), and evolve the population through crossover and mutation of the movable blocks, evaluating individuals by rewarding longer calculated paths. You would then only have to write a resource-efficient path calculator, with no heuristics in your code. The highest-ranking individual in the last generation of the evolution is your solution: the desired block pattern for this map.
Genetic algorithms are proven to take you, under ideal conditions, to a local maximum (or minimum) in reasonable time, which may be impossible to reach with analytic solutions on a sufficiently large data set (i.e. a big enough map in your situation).
You haven't stated the language in which you're going to develop this algorithm, so I can't propose frameworks that may perfectly suit your needs.
Note that if your map is dynamic, meaning that the map may change over tower-defense iterations, you may want to avoid this technique, since re-evolving an entire new population every wave may be too intensive. A skeleton of such an evolutionary loop is sketched below.
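In this hedged skeleton, individuals are sets of K wall cells, path_length is an assumed BFS-based evaluator returning float('inf') when the maze gets fully blocked, and free_cells is a list of legal positions (assumed larger than K):

import random

def evolve(free_cells, k, path_length, pop_size=50, generations=200):
    def fitness(ind):
        length = path_length(ind)
        return -1 if length == float("inf") else length  # cull blocked mazes

    def crossover(a, b):
        # Child draws k cells from the union of both parents' walls.
        return set(random.sample(list(a | b), k))

    def mutate(ind):
        # Swap one wall for a fresh free cell, keeping exactly k walls.
        child = set(ind)
        child.remove(random.choice(tuple(child)))
        child.add(random.choice([c for c in free_cells if c not in child]))
        return child

    pop = [set(random.sample(free_cells, k)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # longer path = fitter
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(crossover(random.choice(survivors),
                                            random.choice(survivors)))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)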
I'm not at all an algorithms expert, but looking at the grid makes me wonder if Conway's game of life might somehow be useful for this. With a reasonable initial seed and well-chosen rules about birth and death of towers, you could try many seeds and subsequent generations thereof in a short period of time.
You already have a measure of fitness in the length of the creeps' path, so you could pick the best one accordingly. I don't know how well (if at all) it would approximate the best path, but it would be an interesting thing to use in a solution.
