Manhattan distance generalization - algorithm

For a research project I'm working on, I'm trying to find a satisfying heuristic based on Manhattan distance that can work with any problem and domain as input, i.e. what is known as a domain-independent heuristic.
For now, I know how to apply Manhattan distance to grid-based problems.
Can someone give a tip on how to generalize it to work on every domain and problem, not just grid-based problems?

The generalization of Manhattan distance is simple. It is a metric which defines the distance between two multi-dimensional points as the sum of the distances along each dimension:
md(A, B) = dist(a1, b1) + dist(a2, b2) + ...
The distances along each dimension are assumed to be simple to calculate. For numbers, the distance is the absolute value of the difference between the values.
This can be extended to other domains as well. For instance, the distance between two strings could be taken as the Levenshtein distance -- and that would prove to be an interesting metric in conjunction with other dimensions.
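As a minimal sketch of this per-dimension formulation (the list of per-dimension distance functions passed in is an illustrative convention, not a fixed API):

```python
def generalized_manhattan(a, b, dist_fns):
    """Generalized Manhattan distance: sum the per-dimension distances.

    a, b     -- sequences of equal length, one value per dimension
    dist_fns -- one distance function per dimension (absolute difference
                for numbers, Levenshtein for strings, etc.)
    """
    return sum(d(x, y) for d, x, y in zip(dist_fns, a, b))

numeric = lambda x, y: abs(x - y)
print(generalized_manhattan((1, 4, 7), (3, 1, 7), [numeric] * 3))  # 2 + 3 + 0 = 5
```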

The Manhattan distance heuristic is an attempt to measure the minimum number of steps required to reach the goal state. The closer it gets to the actual number of steps, the fewer nodes have to be expanded during the search; at the extreme, with a perfect heuristic, you only expand nodes that are guaranteed to be on the goal path.
For a more academic approach to generalizing this idea, you want to search for domain-independent heuristics; a lot of research was done on this in the late 1990s and early 2000s, although even today a small amount of domain knowledge can usually get you much better results. That being said, there are some good places to start:
delete relaxation: the expand function probably contains some restrictions; remove one or more of those restrictions and you end up with a much easier problem, one that can probably be solved in real time, and you use the value obtained from that relaxed problem as the heuristic value. E.g. in the sliding tile puzzle, delete the constraint that a piece cannot move on top of other pieces and you end up with the Manhattan distance; relax the constraint that a piece can only move to adjacent squares as well and you end up with the Hamming distance heuristic (see the sketch after this list).
abstraction: mapping every state in the real search to a smaller abstract state space that you can fully evaluate. Pattern databases are a very popular tool in this area.
critical paths: when you know you must pass through specific states (in either the real state space or an abstract state space), you can perform multiple searches between only the critical points to greatly cut down the number of nodes you would have to search in the full state space
landmarks: very accurate heuristics at the cost of typically high computation time. Landmarks are specific locations from which you precompute the distance to every possible other state (typically 5-25 landmarks are used, depending on graph size), and when evaluating each node you compute a lower bound on the distance using those precomputed values.
There are a few other classes of domain independent heuristics, but these are the most popular and widely used in classical planning applications.
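As a hedged illustration of the delete-relaxation entry above, here is a sketch for the sliding-tile puzzle; the tuple-based state encoding (read row by row, 0 for the blank) is an assumption made for the example:

```python
def manhattan_heuristic(state, goal, width):
    # Relax "tiles cannot stack": every tile moves independently,
    # so the estimate is the sum of per-tile Manhattan distances.
    pos = {tile: i for i, tile in enumerate(state)}
    total = 0
    for i, tile in enumerate(goal):
        if tile == 0:
            continue  # the blank does not count
        j = pos[tile]
        total += abs(i // width - j // width) + abs(i % width - j % width)
    return total

def hamming_heuristic(state, goal):
    # Relax adjacency as well: a tile may jump anywhere,
    # so the estimate is just the number of misplaced tiles.
    return sum(1 for s, g in zip(state, goal) if g != 0 and s != g)

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 0, 6, 7, 5, 8)
print(manhattan_heuristic(state, goal, 3), hamming_heuristic(state, goal))  # 2 2
```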

Related

Manhattan, Euclidean and Chebyshev in an A* Algorithm

I am confused about the purpose of the Manhattan, Euclidean and Chebyshev metrics in an A* algorithm. Is it just the distance calculation, or does the A* algorithm find paths in different ways depending on those metrics (vertical & horizontal, or diagonal, or all three)? My impression of these three metrics was that they have different methods of calculating distance, as seen on this website: https://lyfat.wordpress.com/2012/05/22/euclidean-vs-chebyshev-vs-manhattan-distance/
But some people tell me that the A* algorithm moves only vertically and horizontally if the Manhattan metric is used and must be drawn that way, only diagonally for Euclidean, and in all three ways for Chebyshev.
So what I wanted to clarify is: does the A* algorithm run in different directions based on the metric (Manhattan, Chebyshev or Euclidean), or does it run in all directions but with different heuristic costs based on the metric? I am a student and have been confused by this, so any clarification is appreciated!
Actually, things are a little bit the other way around, i.e. we usually know the movement type that we are interested in, and this movement type determines which metric (Manhattan, Chebyshev, Euclidean) is best used in the heuristic.
Changing the heuristic will not change the connectivity of neighboring cells.
In order to make the A* algorithm find paths according to a particular movement type (i.e. only horizontal+vertical, or diagonal, etc), the neighbor enumeration procedure should be set accordingly. (This enumeration of the neighbors of a node is done somewhere inside the main loop of the algorithm, after a node is popped from the queue).
In brief, not the heuristic, but the way the neighbors of a node are enumerated determines which type of movements the A* algorithm allows.
Afterwards, once a movement type has been established and encoded into the algorithm as described above, it is also important to find a good heuristic. The heuristic needs to satisfy certain criteria in order to be valid (it must not over-estimate the distance to the target), thus some heuristics are incompatible with certain movement types. With an invalid heuristic, A* is no longer guaranteed to find the optimal solution. A good choice for the heuristic is to use precisely the one measuring distance under the selected movement type (e.g. Manhattan for horizontal/vertical, and so on).
It is also worth mentioning the octile distance, which is a very accurate estimate of the distance when traveling on a grid with neighboring diagonals allowed. It basically estimates a direct path from A to B using neighboring diagonal moves which have a cost of sqrt(2) instead of 1 for cardinal movements. In other words it is a kind of Manhattan distance but with diagonals.
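A small sketch of the octile distance just described, assuming unit-cost cardinal moves, sqrt(2)-cost diagonal moves and no obstacles:

```python
import math

def octile_distance(ax, ay, bx, by):
    # Take min(dx, dy) diagonal steps at cost sqrt(2), then the remaining
    # |dx - dy| cardinal steps at cost 1.
    dx, dy = abs(ax - bx), abs(ay - by)
    return (dx + dy) + (math.sqrt(2) - 2) * min(dx, dy)

print(octile_distance(0, 0, 3, 5))  # 3*sqrt(2) + 2, roughly 6.24
```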
A very good resource on all of those grid heuristics is found here
http://theory.stanford.edu/~amitp/GameProgramming/Heuristics.html

What are some good methods to finding a heuristic for the A* algorithm?

You have a map of square tiles where you can move in any of the 8 directions. Given that you have function called cost(tile1, tile2) which tells you the cost of moving from one adjacent tile to another, how do you find a heuristic function h(y, goal) that is both admissible and consistent? Can a method for finding the heuristic be generalized given this setting, or would it vary depending on the cost function?
Amit's tutorial is one of the best I've seen on A* (Amit's page). You should find some very useful hints about heuristics on that page.
Here is the quote about your problem:
On a square grid that allows 8 directions of movement, use Diagonal distance (L∞).
It depends on the cost function.
There are a couple of common heuristics, such as Euclidean distance (the absolute distance between two tiles on a 2d plane) and Manhattan distance (the sum of the absolute x and y deltas). But these assume that the actual cost is never less than a certain amount. Manhattan distance is ruled out if your agent can efficiently move diagonally (i.e. the cost of moving to a diagonal is less than 2). Euclidean distance is ruled out if the cost of moving to a neighbouring tile is less than the absolute distance of that move (e.g. maybe if the adjacent tile was "downhill" from this one).
Edit
Regardless of your cost function, you always have an admissible and consistent heuristic in h(t1, t2) = -∞. It's just not a good one.
Yes, the heuristic depends on the cost function, in a couple of ways. First, it must be in the same units. Second, the heuristic must never exceed the cost of the cheapest actual path through the nodes.
In the real world, used for things like navigation on a road network, your heuristic might be "the time a car would take on a direct path at 1.5x the speed limit." The cost for each road segment would use the actual speed limit, which will give a higher cost.
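A tiny sketch of that road-network idea, under the assumption of planar coordinates in metres and an illustrative top speed limit; travelling the straight-line distance at 1.5x that limit can only underestimate the real travel time, which keeps the heuristic admissible:

```python
import math

TOP_SPEED_LIMIT = 130 / 3.6  # highest speed limit on the network, in m/s (illustrative)

def travel_time_heuristic(node, goal):
    # Optimistic estimate: straight-line distance driven at 1.5x the top limit.
    dx, dy = node[0] - goal[0], node[1] - goal[1]
    return math.hypot(dx, dy) / (1.5 * TOP_SPEED_LIMIT)

print(travel_time_heuristic((0.0, 0.0), (10_000.0, 0.0)))  # roughly 185 s for 10 km
```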
So, what is your cost function between tiles? Is it based on physical properties, or defined outside of your graph?

Optimal algorithm for path-finding in a matrix that does not fit entirely into memory

I'm facing a hard problem:
Imagine I have a map of an entire country, represented by a huge matrix of cells. Each cell represents 1 square meter of territory and holds a double value between 0 and 1 that represents the cost of traversing the cell.
The map obviously does not fit in memory.
I am trying to wrap my mind around a way to calculate the optimal path for a robot from a start point to an end position. The first idea I had was to make a TCP-like sliding window, with a minimap of the real map around the moving robot, and run the A* algorithm inside it, but I'm facing some problems with maps with huge walls, bad pathfinding, etc.
I have been searching the literature on A*-like algorithms and could not see what a good solution for this problem would look like.
I'm wondering if someone has faced a similar problem or can help with an idea of a possible solution!
Thanks in advance :)
Since I do not know your exact data, here is some information that could be useful:
A partial path of a shortest path is itself a shortest path, i.e. you might split up your matrix into submatrices and find (all) shortest paths in there. Note that you do not have to store all results: you can, for example, save memory by not storing a complete path but just the information that a path goes from A to B. The intermediate nodes can be computed again later or stored in a file. You might even be able to precompute some shortest paths for certain areas.
Another approach is that you might be able to compress your matrix in some way. I.e. if you have large areas consisting only of one and the same number, it might be good to store just that number and the dimensions of that area.
Another approach (in connection with precomputing some shortest paths) is to generate different levels of detail of your map. Considering a map of the USA, this might look like the following: the coarsest level of detail contains just the cities New York, Los Angeles, Chicago, Dallas, Philadelphia, Houston and Phoenix. The finer the levels get, the more cities they contain, but, on the other hand, the smaller the area of the whole map they cover.
Does your problem have any special structure, e.g., does the triangle inequality hold/can you guarantee that the shortest path doesn't jog back and forth? Do you want to perform the query many times? (If so you can do pre-processing that will amortize over multiple queries.) Do you need the exact minimum solution, or will something within an epsilon factor be OK?
One thought was that you can coarsen the matrix: form 100 meter by 100 meter squares, and determine the shortest path distances through each 100 x 100 square. Now this will fit in memory (about 1 gigabyte), you can run Dijkstra on the coarse grid, and then expand each step through the corresponding 100 x 100 square.
Also, have you tried running a forward-backward version of Dijkstra's algorithm? I.e., expand from the source and search for the sink at the same time, and stop when there's an intersection.
Incidentally, why do you need such a fine level of granularity?
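A hedged sketch of the coarsening idea above: average (or, for an optimistic bound, take the minimum of) each 100 x 100 block of fine cells into one coarse cell, run Dijkstra or A* on the coarse grid, and refine only the blocks the coarse path crosses. The `read_block` callback is hypothetical and stands in for whatever out-of-core access you have to the full matrix.

```python
import numpy as np

BLOCK = 100  # one coarse cell = 100 x 100 fine cells

def coarsen(read_block, n_rows, n_cols):
    # read_block(r, c) is assumed to return the BLOCK x BLOCK sub-matrix of
    # traversal costs starting at fine cell (r * BLOCK, c * BLOCK).
    coarse = np.empty((n_rows // BLOCK, n_cols // BLOCK))
    for r in range(coarse.shape[0]):
        for c in range(coarse.shape[1]):
            coarse[r, c] = read_block(r, c).mean()  # or .min() for an optimistic bound
    return coarse

# Demo with a small in-memory matrix standing in for the huge on-disk one.
fine = np.random.rand(1000, 1000)
coarse = coarsen(lambda r, c: fine[r*BLOCK:(r+1)*BLOCK, c*BLOCK:(c+1)*BLOCK], 1000, 1000)
print(coarse.shape)  # (10, 10)
```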
Here are some ideas that may work
You can model your path as a piecewise linear curve. If you have 31 line segments, then your curve is fully described by 60 numbers. Each of the possible curves has a cost, so the cost is a function of the following form:
cost(x1, x2, x3, ..., x60)
Now your problem is to find the global optimum of a function of 60 variables. You can use standard methods to do this. One idea is to use genetic algorithms. Another idea is to use a Monte Carlo method such as parallel tempering:
http://en.wikipedia.org/wiki/Parallel_tempering
Whenever you have a promising path, you can use it as a starting point to find a local minimum of the cost function. Maybe you can use some interpolation to make your cost function differentiable. Then you can use Newton's method (or rather BFGS) to find local minima of the cost function:
http://en.wikipedia.org/wiki/Local_minimum
http://en.wikipedia.org/wiki/BFGS
Your problem is somewhat similar to the problem of finding reaction paths in chemical systems. Maybe you can find some inspiration in the book "Energy Landscapes" by David Wales.
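A rough sketch of the piecewise-linear-path idea above, using SciPy's BFGS on the 60 interior waypoint coordinates; the smooth `terrain_cost` here is a made-up stand-in for an interpolated version of the real cell costs:

```python
import numpy as np
from scipy.optimize import minimize

def terrain_cost(p):
    # Made-up smooth stand-in for the (interpolated) traversal cost at point p.
    return 1.0 + 0.5 * np.sin(p[0]) * np.cos(p[1])

def path_cost(flat, start, end, samples=8):
    # Cost of the piecewise-linear path start -> waypoints -> end, approximated
    # by sampling the terrain cost along each segment.
    pts = np.vstack([start, flat.reshape(-1, 2), end])
    total = 0.0
    for a, b in zip(pts[:-1], pts[1:]):
        ts = np.linspace(0.0, 1.0, samples)
        seg = a + np.outer(ts, b - a)
        total += np.linalg.norm(b - a) * np.mean([terrain_cost(p) for p in seg])
    return total

start, end = np.array([0.0, 0.0]), np.array([10.0, 10.0])
x0 = np.linspace(start, end, 32)[1:-1].ravel()  # 30 interior waypoints = 60 numbers
result = minimize(path_cost, x0, args=(start, end), method="BFGS")
print(result.fun)  # cost of the locally optimal curve
```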
But I also have some questions:
Is it necessary for you to find the optimal path, or are you just looking for a path that is OK?
How much computer power and time do you have at hand?
Can the robot make sharp turns, or do you need extra physics modelling to improve the cost function?

3D clustering Algorithm

Problem Statement:
I have the following problem:
There are more than a billion points in 3D space. The goal is to find the top N points that have the largest number of neighbors within a given distance R. Another condition is that the distance between any two of those top N points must be greater than R. The distribution of the points is not uniform. It is very common that certain regions of the space contain a lot of points.
Goal:
To find an algorithm that can scale well to many processors and has a small memory requirement.
Thoughts:
Normal spatial decomposition is not sufficient for this kind of problem due to the non-uniform distribution. An irregular spatial decomposition that evenly divides the number of points may help with the problem. I would really appreciate it if someone could shed some light on how to solve this.
Use an Octree. For 3D data with a limited value domain that scales very well to huge data sets.
Many of the aforementioned methods such as locality sensitive hashing are approximate versions designed for much higher dimensionality where you can't split sensibly anymore.
Splitting at each level into 8 bins (2^d for d=3) works very well. And since you can stop when there are too few points in a cell, and build a deeper tree where there are a lot of points, this should fit your requirements quite well.
For more details, see Wikipedia:
https://en.wikipedia.org/wiki/Octree
Alternatively, you could try to build an R-tree. But the R-tree tries to balance, making it harder to find the most dense areas. For your particular task, this drawback of the Octree is actually helpful! The R-tree puts a lot of effort into keeping the tree depth equal everywhere, so that each point can be found at approximately the same time. However, you are only interested in the dense areas, which will be found on the longest paths in the Octree without even having to look at the actual points yet!
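A hedged sketch of that octree idea: split recursively into octants, stop when a cell has few points, and let the deepest, fullest leaves point at the densest regions. Leaf size, depth limit and bounding box are illustrative.

```python
class Octree:
    def __init__(self, points, lo, hi, leaf_size=32, depth=0, max_depth=12):
        # points: list of (x, y, z); lo, hi: opposite corners of this cell's box.
        self.lo, self.hi, self.depth = lo, hi, depth
        self.points, self.children = points, []
        if len(points) > leaf_size and depth < max_depth:
            mid = tuple((l + h) / 2 for l, h in zip(lo, hi))
            octants = {}
            for p in points:
                key = tuple(p[i] >= mid[i] for i in range(3))  # which octant p falls in
                octants.setdefault(key, []).append(p)
            for key, pts in octants.items():
                clo = tuple(mid[i] if key[i] else lo[i] for i in range(3))
                chi = tuple(hi[i] if key[i] else mid[i] for i in range(3))
                self.children.append(Octree(pts, clo, chi, leaf_size, depth + 1, max_depth))
            self.points = []  # an inner node hands its points down to its children

    def leaves_by_density(self):
        # Deep, full leaves correspond to the densest regions of the cloud.
        if not self.children:
            return [self]
        return sorted((leaf for c in self.children for leaf in c.leaves_by_density()),
                      key=lambda l: (l.depth, len(l.points)), reverse=True)

import random
pts = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)) for _ in range(10_000)]
tree = Octree(pts, lo=(-5.0, -5.0, -5.0), hi=(5.0, 5.0, 5.0))
print(tree.leaves_by_density()[0].lo)  # corner of the densest leaf cell
```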
I don't have a definite answer for you, but I have a suggestion for an approach that might yield a solution.
I think it's worth investigating locality-sensitive hashing. I think dividing the points evenly and then applying this kind of LSH to each set should be readily parallelisable. If you design your hashing algorithm such that the bucket size is defined in terms of R, it seems likely that for a given set of points divided into buckets, the points satisfying your criteria are likely to exist in the fullest buckets.
Having performed this locally, perhaps you can apply some kind of map-reduce-style strategy to combine spatial buckets from different parallel runs of the LSH algorithm in a step-wise manner, making use of the fact that you can begin to exclude parts of your problem space by discounting entire buckets. Obviously you'll have to be careful about edge cases that span different buckets, but I suspect that at each stage of merging you could apply different bucket sizes/offsets so as to remove this effect (e.g. merge spatially equivalent buckets as well as adjacent buckets). I believe this method could keep memory requirements small (i.e. you shouldn't need to store much more than the points themselves at any given moment, and you are always operating on small(ish) subsets).
If you're looking for some kind of heuristic then I think this result will immediately yield something resembling a "good" solution - i.e. it will give you a small number of probable points which you can check satisfy your criteria. If you are looking for an exact answer, then you are going to have to apply some other methods to trim the search space as you begin to merge parallel buckets.
Another thought I had was that this could relate to finding the metric k-center. It's definitely not the exact same problem, but perhaps some of the methods used in solving that are applicable in this case. The problem is that this assumes you have a metric space in which computing the distance metric is possible - in your case, however, the presence of a billion points makes it undesirable and difficult to perform any kind of global traversal (e.g. sorting of the distances between points). As I said, just a thought, and perhaps a source of further inspiration.
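A small sketch of the bucketing flavour of this: a plain grid hash with cell size R rather than a full LSH scheme, just to show how the fullest buckets surface the dense-candidate regions:

```python
from collections import defaultdict

def grid_buckets(points, R):
    # Hash each 3D point to the grid cell of side R containing it; points within
    # distance R of each other end up in the same or an adjacent cell.
    buckets = defaultdict(list)
    for p in points:
        buckets[tuple(int(c // R) for c in p)].append(p)
    return buckets

pts = [(0.1, 0.2, 0.3), (0.15, 0.25, 0.31), (5.0, 5.0, 5.0)]
fullest = max(grid_buckets(pts, R=1.0).items(), key=lambda kv: len(kv[1]))
print(fullest)  # ((0, 0, 0), [(0.1, 0.2, 0.3), (0.15, 0.25, 0.31)])
```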
Here are some possible parts of a solution. There are various choices at each stage, which will depend on Ncluster, on how fast the data changes, and on what you want to do with the means.
3 steps: quantize, box, K-means.
1) quantize: reduce the input XYZ coordinates to say 8 bits each, by taking 2^8 percentiles of X, Y, Z separately. This will speed up the whole flow without much loss of detail. You could sort all 1G points, or just a random 1M, to get 8-bit x0 < x1 < ... < x256, y0 < y1 < ... < y256, z0 < z1 < ... < z256, with 2^(30-8) points in each range. To map a float X to an 8-bit x, an unrolled binary search is fast; see Bentley, Pearls p. 95.
Added: Kd trees split any point cloud into different-sized boxes, each with ~ Leafsize points, which is much better than splitting X, Y, Z as above. But afaik you'd have to roll your own Kd tree code to split only the first say 16M boxes, and keep counts only, not the points.
2) box: count the number of points in each 3D box [xj .. xj+1, yj .. yj+1, zj .. zj+1]. The average box will have 2^(30-3*8) points; the distribution will depend on how clumpy the data is. If some boxes are too big or get too many points, you could a) split them into 8, or b) track the centre of the points in each box; otherwise just take the box midpoints.
3) K-means clustering on the 2^(3*8) box centres. (Google parallel "k means" -> 121k hits.) This depends strongly on K aka Ncluster, and also on your radius R. A rough approach would be to grow a heap of the say 27*Ncluster boxes with the most points, then take the biggest ones subject to your radius constraint. (I like to start with a minimum spanning tree, then remove the K-1 longest links to get K clusters.) See also color quantization.
I'd make Nbit, here 8, a parameter from the beginning. What is your Ncluster?
Added: if your points are moving in time, see collision-detection-of-huge-number-of-circles on SO.
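A hedged sketch of steps 1 and 2 (quantize and box) using NumPy percentile edges; Nbit = 8 and the random data are placeholders for the real cloud:

```python
import numpy as np

NBIT = 8
points = np.random.rand(1_000_000, 3)  # stand-in for the real 1G points

# 1) quantize: per-axis percentile edges -> 8-bit codes per coordinate
edges = [np.percentile(points[:, d], np.linspace(0, 100, 2**NBIT + 1)) for d in range(3)]
codes = np.stack([np.clip(np.searchsorted(edges[d], points[:, d]) - 1, 0, 2**NBIT - 1)
                  for d in range(3)], axis=1)

# 2) box: count how many points fall in each 3D box of quantized codes
box_ids = (codes[:, 0] << (2 * NBIT)) | (codes[:, 1] << NBIT) | codes[:, 2]
ids, counts = np.unique(box_ids, return_counts=True)
print(ids[np.argmax(counts)], counts.max())  # fullest box id and its population
```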
I would also suggest using an octree. The OctoMap framework is very good at dealing with huge 3D point clouds. It does not store all the points directly, but updates the occupancy density of every node (aka 3D box).
After the tree is built, you can use a simple iterator to find the node with the highest density. If you would like to model the point density or distribution inside the nodes, OctoMap is very easy to adapt.
Here you can see how it was extended to model the point distribution using a planar model.
Just an idea: create a graph with the given points, and edges between points whose distance is < R.
Creating this kind of graph is similar to spatial decomposition. Your questions can then be answered with local search in the graph: the first is finding the vertices with maximum degree, the second is finding a maximal unconnected set of maximum-degree vertices.
I think both the graph creation and the search can be done in parallel. This approach can have a large memory requirement; splitting the domain and working with graphs for smaller volumes can reduce the memory needed.
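A brute-force sketch of that graph formulation on a tiny in-memory sample (a billion-point instance would need the domain splitting mentioned above before anything like this becomes feasible):

```python
import math

def top_n_dense(points, R, N):
    # Degree = number of neighbours within R; greedily keep the highest-degree
    # points that are pairwise more than R apart.
    degree = [sum(1 for q in points if q is not p and math.dist(p, q) <= R)
              for p in points]
    chosen = []
    for i in sorted(range(len(points)), key=lambda i: -degree[i]):
        if all(math.dist(points[i], points[j]) > R for j in chosen):
            chosen.append(i)
        if len(chosen) == N:
            break
    return [points[i] for i in chosen]

pts = [(0, 0, 0), (0.5, 0, 0), (0.4, 0.1, 0), (10, 10, 10), (10.2, 10, 10)]
print(top_n_dense(pts, R=1.0, N=2))  # [(0, 0, 0), (10, 10, 10)]
```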

Mapping a set of 3D points to another set with minimal sum of distances

Given are two sets of three-dimensional points, a source set and a destination set. The number of points in each set is arbitrary (it may be zero). The task is to assign one or no source point to every destination point so that the sum of all distances is minimal. If there are more source than destination points, the additional points are to be ignored.
There is a brute-force solution to this problem, but since the number of points may be big, it is not feasible. I heard this problem is easy in 2D with equal set sizes, but sadly these preconditions are not given here.
I'm interested in both approximations and exact solutions.
Edit: Haha, yes, I suppose it does sound like homework. Actually, it's not. I'm writing a program that receives positions of a large number of cars and i'm trying to map them to their respective parking cells. :)
One way you could approach this problem is to treat it as the classical assignment problem: http://en.wikipedia.org/wiki/Assignment_problem
You treat the points as the vertices of a graph, and the weights of the edges are the distances between points. Because the fastest algorithms assume that you are looking for a maximum matching (and not a minimum one, as in your case), and that the weights are non-negative, you can redefine the weights to be, e.g.:
weight(A, B) = bigNumber - distance(A, B)
where bigNumber is bigger than your longest distance.
Obviously you end up with a bipartite graph. Then you use one of the standard algorithms for maximum weighted bipartite matching (lots of resources on the web, e.g. http://valis.cs.uiuc.edu/~sariel/teach/courses/473/notes/27_matchings_notes.pdf, or Wikipedia for an overview: http://en.wikipedia.org/wiki/Perfect_matching#Maximum_bipartite_matchings). This way you end up with an O(NM max(N,M)) algorithm, where N and M are the sizes of your sets of points.
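For reference, a hedged sketch using SciPy's linear_sum_assignment, which minimises the cost directly (so the bigNumber trick isn't needed) and accepts rectangular cost matrices when the set sizes differ:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_points(sources, destinations):
    # Assign at most one source to each destination, minimising the total
    # Euclidean distance of the matched pairs.
    src = np.asarray(sources, dtype=float)
    dst = np.asarray(destinations, dtype=float)
    cost = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)  # N x M distances
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist())), cost[rows, cols].sum()

pairs, total = match_points([(0, 0, 0), (5, 5, 5)], [(0, 1, 0), (4, 5, 5), (9, 9, 9)])
print(pairs, total)  # [(0, 0), (1, 1)] 2.0
```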
Off the top of my head, spatial sort followed by simulated annealing.
Grid the space & sort the sets into spatial cells.
Solve the O(NM) problem within each cell, then within cell neighborhoods, and so on, to get a trial matching.
Finally, run lots of cycles of simulated annealing, in which you randomly alter matches, so as to explore the nearby space.
This is heuristic, getting you a good answer though not necessarily the best, and it should be fairly efficient due to the initial grid sort.
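A hedged sketch of the annealing refinement step, assuming for simplicity equal-size sets and a matching represented as matching[i] = destination index for source i:

```python
import math, random

def anneal(sources, destinations, matching, steps=50_000, T0=1.0):
    # Randomly swap two assignments; keep improving swaps, and accept worsening
    # ones with probability exp(-delta / T) under a cooling temperature T.
    def pair_cost(i):
        return math.dist(sources[i], destinations[matching[i]])
    for step in range(steps):
        T = T0 * (1 - step / steps) + 1e-9
        i, j = random.randrange(len(matching)), random.randrange(len(matching))
        before = pair_cost(i) + pair_cost(j)
        matching[i], matching[j] = matching[j], matching[i]
        delta = pair_cost(i) + pair_cost(j) - before
        if delta > 0 and random.random() >= math.exp(-delta / T):
            matching[i], matching[j] = matching[j], matching[i]  # undo the swap
    return matching

src = [(0, 0, 0), (1, 0, 0), (5, 5, 5)]
dst = [(5, 5, 4), (0.1, 0, 0), (1.1, 0, 0)]
print(anneal(src, dst, matching=[0, 1, 2]))  # most likely [1, 2, 0]
```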
Although I don't really have an answer to your question, I can suggest looking into the following topics. (I know very little about this, but encountered it previously on Stack Overflow.)
Nearest Neighbour Search
kd-tree
Hope this helps a bit.
