I tried solving the N-puzzle problem using the A* algorithm with two heuristics: Manhattan distance and number of misplaced tiles. Even though the misplaced-tiles heuristic took considerably longer, the number of moves returned by the two methods was equal. Is the number of moves always the same with the two heuristics, or can it differ?
If the heuristic is admissible (i.e. it never overestimates the number of moves still to be made), then the solution found by the A* algorithm is guaranteed to be optimal.
Both heuristics you describe are admissible, so it is expected that both give the optimal number of moves.
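For concreteness, here is a minimal sketch of the two heuristics, assuming (just for illustration) that a 15-puzzle state is a flat tuple with 0 for the blank and the goal is 1..15 followed by the blank:

```python
# Minimal sketch of the two admissible 15-puzzle heuristics.
# Assumes a state is a flat tuple of length 16 with 0 denoting the blank;
# this representation is an assumption for illustration only.

SIZE = 4
GOAL = tuple(range(1, SIZE * SIZE)) + (0,)

def misplaced_tiles(state):
    """Number of non-blank tiles not in their goal position."""
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def manhattan_distance(state):
    """Sum over tiles of |row - goal_row| + |col - goal_col|."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        goal_index = tile - 1  # tile t belongs at index t-1 in the goal
        total += abs(i // SIZE - goal_index // SIZE) + abs(i % SIZE - goal_index % SIZE)
    return total
```

Both are lower bounds: each misplaced tile needs at least one move, and each move changes exactly one tile's Manhattan distance by exactly one, so neither can overestimate.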
I am using A* in order to solve the Asymmetric Traveling Salesman problem.
My state representation has 4 variables:
1 - Visited cities (List)
2 - Unvisited cities (List)
3 - Current City (Integer)
4 - Current Cost (Integer)
However, even though I can find many path-construction algorithms such as Nearest Neighbor, k-opt, and so on, I can't find a heuristic suitable for A*, that is, an h(n) function that takes a state as input and returns an integer corresponding to that state's quality.
So my question is: do such heuristics exist? Any recommendations?
Thanks in advance.
The weight of the minimum spanning tree of the subgraph containing all unvisited vertices and the current vertex is a lower bound on the cost to finish the current path. It can be used with the A* algorithm because it can't overestimate the remaining distance: otherwise, the remaining path would span those vertices with a weight smaller than the weight of the minimum spanning tree, which is a contradiction.
I've never tried it though so I don't know how well it'll work in practice.
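A sketch of that bound in Python, assuming the state fields from the question (current city, set of unvisited cities) and a distance matrix `dist`; for an asymmetric instance you could use min(dist[i][j], dist[j][i]) per edge so the bound stays a valid underestimate:

```python
def mst_weight(vertices, dist):
    """Weight of a minimum spanning tree over `vertices` (Prim's algorithm)."""
    vertices = list(vertices)
    if len(vertices) <= 1:
        return 0
    # best[v] = cheapest edge connecting v to the tree built so far
    best = {v: dist[vertices[0]][v] for v in vertices[1:]}
    total = 0
    while best:
        v = min(best, key=best.get)  # attach the cheapest remaining vertex
        total += best.pop(v)
        for u in best:
            best[u] = min(best[u], dist[v][u])
    return total

def mst_heuristic(current, unvisited, dist):
    """Lower bound on the cost to finish the path from `current`."""
    return mst_weight([current] + list(unvisited), dist)
```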
There always are: h(n) = 0 always works. It is useless, turning A* into Dijkstra, but it's definitely admissible.
Another obvious one: let h(n) be the shortest edge from the current city back to the beginning. Still a huge underestimate, but at least it's not necessarily zero. It's obviously valid: the loop has to be closed eventually, and (given this partial route) there is no shorter way to do it.
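A sketch of that idea, in a slightly conservative variant (taking the minimum closing edge over all remaining cities, which is certainly a lower bound on the final leg into the start city); the parameter names are illustrative:

```python
def closing_edge_heuristic(current, unvisited, start, dist):
    """Cheapest possible final edge back to the start; admissible but very loose."""
    if not unvisited:
        return dist[current][start]          # only the closing edge remains
    # The tour's last leg enters `start` from one of the remaining cities.
    return min(dist[c][start] for c in unvisited)
```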
You can be a bit more clever here. For example, you could use linear programming to find an underestimate of the length of the route from the current node back to the beginning that touches every city in the set of unvisited cities: make two variables for each edge (one for each direction), then for every city add a constraint forcing the sum of entering edges to be 1 and a constraint forcing the sum of exiting edges to be 1; the objective weights are simply the distances. Of course, if you're doing that, you might as well drop A* and just use the usual integer linear programming tricks. A* doesn't seem like a good fit here (especially in the beginning, when the branching factor is too high and the heuristic won't guide the search enough yet), but I haven't tried this, so who knows.
Also, given the solution from the LP, you can improve it a lot with some simple tricks (and some advanced tricks that whole books have been written about, but let's not go there; read the books if you want to know). For example, one thing the LP likes to do is form lots of little triangles. These satisfy the degree constraints locally and keep everything nice and short, but they don't form a tour, and forcing the solution to be more tour-like pushes the heuristic higher, i.e. closer to the true cost. To remove the sub-tours, detect them in the fractional solution, then force the number of edges entering each sub-tour to be at least 1 (it may have to exceed 1 at some point, so don't force it to be exactly 1) and likewise force the number of exits to be at least 1, by adding the corresponding constraints and solving again. There are many more tricks, but this should already give a very reasonable heuristic, much closer to the actual cost than taking any of the overestimating heuristics and dividing by its worst-case overestimation factor. The problem with that approach is that the heuristic is usually much better than its worst-case factor suggests, and dividing by the worst-case factor really kills its quality.
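A hedged sketch of that degree-constraint LP using scipy.optimize.linprog; the function name and state fields are illustrative, the cities are assumed pairwise distinct, and the subtour cuts described above are left out, so this is only the plain relaxation:

```python
import numpy as np
from scipy.optimize import linprog

def lp_lower_bound(current, start, unvisited, dist):
    """LP relaxation of 'path from current through all unvisited back to start'."""
    if not unvisited:
        return dist[current][start]
    nodes = [current] + list(unvisited) + [start]   # assumed pairwise distinct
    n = len(nodes)
    # One variable per directed edge (i, j), i != j.
    edges = [(i, j) for i in range(n) for j in range(n) if i != j]
    c = [dist[nodes[i]][nodes[j]] for i, j in edges]

    A_eq, b_eq = [], []
    for k in range(n):
        if nodes[k] != start:    # everything except the start exits exactly once
            A_eq.append([1.0 if i == k else 0.0 for i, j in edges])
            b_eq.append(1.0)
        if nodes[k] != current:  # everything except current is entered exactly once
            A_eq.append([1.0 if j == k else 0.0 for i, j in edges])
            b_eq.append(1.0)

    res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, 1), method="highs")
    return res.fun if res.success else 0.0   # fall back to the trivial bound
```

Because any actual remaining path satisfies these constraints, the fractional optimum can only be smaller, so the bound is admissible.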
Say I have a grid with some squares designated as "goal" squares. I am using A* in order to navigate this grid, trying to visit every goal square at least once using non-diagonal movement. Once a goal square has been visited, it is no longer considered a goal square. Think Pac Man, moving around and trying to eat all the dots.
I am looking for a consistent heuristic to give A* to aid in navigation. I decided to try a "return the Manhattan Distance to the nearest unvisited goal" heuristic for any given location. I have been told that this is not a consistent heuristic but I do not understand why.
Moving one square towards the closest goal square has a cost of one, and the Manhattan Distance is also reduced by one. Landing on a goal square will either increase the value of the heuristic (because it will now seek the next nearest unvisited goal) or end the search (if the goal was the last unvisited goal).
h(N) <= c(N,P) + h(P) seems to always hold. What is it that makes this heuristic inconsistent, or is my instructor mistaken?
If you are asking how to use A* to find the shortest path through all the goals, the answer is: you can't (with only one run). This is the Travelling Salesman Problem, an NP-complete problem. To solve it using A*, you'd need to try every permutation of goal orderings. Each path from a single start to a single goal could then be solved using A* (so you'd need to run the algorithm multiple times for each permutation).
However, if you are asking how to use A* to find the shortest path from a single start to any one of a number of goals, your solution works fine, and your heuristic is indeed consistent. The minimum of several consistent heuristics is still consistent, which is easy to prove.
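A minimal sketch of the heuristic under discussion, assuming positions are (row, col) tuples and `goals` is the collection of still-unvisited goal squares:

```python
def nearest_goal_heuristic(pos, goals):
    """Manhattan distance to the nearest unvisited goal; 0 once none remain."""
    if not goals:
        return 0
    return min(abs(pos[0] - g[0]) + abs(pos[1] - g[1]) for g in goals)
```

Each single-goal Manhattan distance is consistent on a 4-connected unit-cost grid (one move changes it by at most 1), and taking the minimum preserves consistency.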
I have implemented an A* algorithm for solving the 15-puzzle. I did some research into viable or admissible heuristics, looking for a fast solution, and I found that using 4*Manhattan distance as the heuristic always solves any 15-puzzle in less than a second. I tried this and it indeed works, but I couldn't find an explanation for it.
Can anyone explain this?
4*Manhattan distance is not an admissible heuristic, which makes the algorithm behave "closer" to greedy best-first search (where the algorithm chooses which node to expand based solely on the heuristic function). This makes the algorithm sometimes prefer depth and exploration over breadth and optimality.
The idea is similar to what happens in A*-epsilon, where you allow A* to expand non-optimal nodes up to a certain bound in order to speed up the algorithm. Actually, I suspect you will get the same (or similar) results if you run A*-epsilon with Manhattan distance and epsilon = 3. (If I am correct, this makes the solution found with the modified heuristic bounded by 4*OPTIMAL, where OPTIMAL is the length of the optimal path.)
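For illustration, a minimal Weighted-A*-style sketch of what scaling the heuristic does to the priority; the names and the `w` parameter are just for this example (w = 1 recovers plain A*, and with an admissible h the returned path costs at most w times the optimum):

```python
import heapq
import itertools

def weighted_a_star(start, neighbors, h, is_goal, w=4.0):
    """A* with priority f = g + w*h; larger w trades optimality for speed."""
    tie = itertools.count()            # tie-breaker so states are never compared
    frontier = [(w * h(start), next(tie), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        for nxt, cost in neighbors(state):   # neighbors yields (state, move cost)
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier,
                               (g2 + w * h(nxt), next(tie), g2, nxt, path + [nxt]))
    return None
```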
I have a set of N points in a D-dimensional metric space. I want to select K of them in such a way that the smallest distance between any two points in the subset is the largest.
For instance, with N=4 and K=3 in 3D Euclidean space, the solution is the face of the tetrahedron having the longest short side.
Is there a classical way to achieve that? Can it be solved exactly in polynomial time?
I have googled as much as I could, but I have not yet figured out what this problem is called.
In my case N=50, K=10 and D=300 typically.
Clarification:
A brute force approach would be to try every combination of K points among the N and determine the closest pair in every subset. The solution is the subset whose closest pair is farthest apart.
Done the trivial way, checking one subset is an O(K^2) process, to be repeated N! / (K! (N-K)!) times.
Hmm: 10^2 * 50! / (10! 40!) = 1,027,227,817,000.
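Just to make that count concrete, a sketch of the brute force (math.dist needs Python 3.8+; points are coordinate sequences of any dimension):

```python
from itertools import combinations
from math import dist, inf

def best_spread_subset(points, k):
    """Exhaustively find the K-subset maximizing the minimum pairwise distance."""
    best_subset, best_min = None, -inf
    for subset in combinations(points, k):
        closest = min(dist(p, q) for p, q in combinations(subset, 2))
        if closest > best_min:
            best_min, best_subset = closest, subset
    return best_subset, best_min
```

At N=50, K=10 this is roughly 10^12 distance evaluations, which is exactly why the question asks for something better.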
I think you might find papers on unit disk graphs informative but discouraging. For instance, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.3113&rep=rep1&type=pdf states that the maximum independent set problem on unit disk graphs is NP-complete, even if the disk representation is known. A unit disk graph is the graph you get by placing points in the plane and forming links between every pair of points at most a unit distance apart.
So if you could solve your problem in polynomial time, you could run it on a unit disk graph for different values of K until you found a value at which the smallest distance between two chosen points was just over one. I think that would give a maximum independent set on the unit disk graph, which would mean solving an NP-complete problem in polynomial time.
(Just about to jump on a bicycle so this is a bit rushed, but searching for papers on unit disk graphs might at least turn up some useful search terms)
Here is another attempt to relate the two problems, piece by piece.
For maximum independent set, see http://en.wikipedia.org/wiki/Maximum_independent_set#Finding_maximum_independent_sets. A decision problem version of this is "Are there K vertices in this graph such that no two are joined by an edge?" If you can solve this, you can certainly find a maximum independent set: find the largest K by asking this question for different values of K, then find the K nodes themselves by asking the question on versions of the graph with one or more nodes deleted.
I state without proof that finding the maximum independent set in a unit disk graph is NP-complete. Another reference for this is http://web.sau.edu/lilliskevinm/wirelessbib/ClarkColbournJohnson.pdf.
A decision version of your problem is "Do there exist K points with distance at least D between any two of them?" Again, you can solve this in polynomial time iff you can solve your original problem in polynomial time: play around until you find the largest D that gives the answer yes, and then delete points and see what happens.
A unit disk graph has an edge exactly when the distance between two points is 1 or less. So if you could solve the decision version of your original problem you could solve the decision version of the unit disk graph problem just by setting D = 1 and solving your problem.
So I think I have constructed a series of links showing that if you could solve your problem you could solve an NP-complete problem by turning it into your problem, which causes me to think that your problem is hard.
I have a problem which I like and love to think about, but unfortunately I'm stuck. I hope you like it too. The problem states:
I have two lists of 2D points (say A and B) and need to pair up points from A with points from B, under the condition that the sum of the distances over all pairs is minimal. A pair contains one point from A and one from B, a point can be used only once, and as many pairs as possible should be created (i.e. min(length(A), length(B))).
I've made a simple example, where color denotes which list the point is from, and the black connections are the solution.
Although this is a nice problem, and I suspect it is NP-hard, it gets nicer: I can build on existing solutions. Suppose I have two lists and the corresponding solution (i.e. the set of pairs); then the problem I need to solve is to re-optimize that solution when a point is added to or removed from either list.
I've unfortunately not been able to come up with any non-brute-force algorithm that yields the optimal solution. I hope you can. Any algorithm is appreciated, in any (pseudo) language, preferably C#.
This problem is solvable in polynomial time via the Hungarian algorithm. To get a square matrix, add dummy entries to the shorter list at "distance 0" from everything.
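A sketch of that answer (in Python rather than the C# the question prefers, just to show the idea; scipy's linear_sum_assignment implements the Hungarian method):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def min_cost_pairing(a, b):
    """Pair points of a with points of b minimizing the total distance."""
    cost = cdist(a, b)                    # |A| x |B| Euclidean distance matrix
    n = max(cost.shape)
    padded = np.zeros((n, n))             # dummy rows/columns at distance 0
    padded[:cost.shape[0], :cost.shape[1]] = cost
    rows, cols = linear_sum_assignment(padded)
    return [(r, c) for r, c in zip(rows, cols)
            if r < cost.shape[0] and c < cost.shape[1]]   # drop dummy pairs
```

The zero-cost padding is exactly the "dummy entries" trick from the answer: it squares the matrix without changing which real pairs are chosen.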
Your problem is an instance of the weighted minimum maximal matching problem (as described in this Wikipedia article). There is no polynomial-time algorithm even for the unweighted problem (all distances equal). There are efficient algorithms to approximately solve it in polynomial time (within a factor of 2).
This is the minimum-weight Euclidean bipartite matching problem. There is an O(n^(2+epsilon)) algorithm.