What is the best algorithm for determining duplicated paths between tracking trips?

I am developing a mobile application for recording user trips. A trip is made up of a sequence of user positions (with longitude and latitude values).
Now my problem is how to determine whether a trip has already been traveled before. In other words, how do I detect duplicated paths between trips?
(I know that we can never have two trips with exactly the same data points, which is why I don't know where to begin; I am looking for an algorithm that can solve this problem approximately.)
Thanks for your help!

There are a number of trajectory distance measures that could help: Euclidean distance, Dynamic Time Warping, Edit Distance with Real Penalty, LCSS, ... Which one to pick depends on how you want to define similarity.
In this paper the authors describe these distance measures and evaluate them.
As far as I understand your scenario, an LCSS- or ERP-based similarity measure might fit. A quick search brought me to this GitHub repository.
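To make this concrete, here is a minimal LCSS-style sketch for two recorded trips. The 30 m matching threshold, the equirectangular distance approximation, and the normalisation by the shorter trip are assumptions you would want to tune to your GPS accuracy and sampling rate:

```python
# Minimal LCSS similarity sketch for two GPS trajectories (lists of
# (latitude, longitude) pairs). The matching threshold and the flat-earth
# distance approximation are illustrative assumptions.
import math

def close_enough(p, q, eps_m=30.0):
    """Treat two (lat, lon) points as 'the same place' if they are within
    eps_m metres, using a simple equirectangular approximation."""
    lat1, lon1 = p
    lat2, lon2 = q
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371000.0 * math.hypot(x, y) <= eps_m

def lcss_similarity(trip_a, trip_b, eps_m=30.0):
    """LCSS length normalised by the shorter trip: values near 1.0 suggest
    that one trip (approximately) retraces the other, near 0.0 no overlap."""
    n, m = len(trip_a), len(trip_b)
    if n == 0 or m == 0:
        return 0.0
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if close_enough(trip_a[i - 1], trip_b[j - 1], eps_m):
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m] / min(n, m)

# Two recordings of roughly the same street should score close to 1.0.
trip1 = [(48.8566, 2.3522), (48.8570, 2.3530), (48.8575, 2.3540)]
trip2 = [(48.8566, 2.3523), (48.8571, 2.3531), (48.8576, 2.3541)]
print(lcss_similarity(trip1, trip2))
```

A score above some threshold (say 0.8) could then flag the pair as "same path"; that threshold, like everything else here, is a tuning decision, not part of LCSS itself.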

Related

Storing points in an efficient way for future distance calculations

I ran into some discussion about an efficient algorithm that solves the following issue:
Suppose we have a map of points, some of which represent parking spots. A car at some specific point on the map suddenly needs parking, so it must find (for the sake of simplicity) the 10 closest parking spots.
The questions are: how do we store the points in a clever way? In what order? Relative to what? How do we organize the distance computations?
Clarifications:
Grouping the points by city does not solve the problem, because the closest parking spot may be in a neighboring city.
The naive algorithm - checking all distances - is obviously unwanted.
I would try to divide the area into squares, where each square either stores its points directly (if it holds a "small amount of points", i.e. is easy to compute) or is split into 4 smaller squares containing points. That way you can easily find the closest points by checking only a handful of squares. If this doesn't fit your needs, I would recommend the nearest neighbor search mentioned by Yakov Galka. I would also think about approaches used by, for example, Google Maps (locating the nearest gas station, etc.), but I could not find anything worth reading.
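As a rough illustration of the bucketing idea (fixed squares rather than the recursive 4-way split described above), here is a sketch that hashes parking spots into grid cells and grows the search ring around the car's cell outward; the cell size and the planar Euclidean distance are assumptions for the example:

```python
# Hash parking spots into square cells, then search outward ring by ring
# from the car's cell until enough candidates have been collected.
import heapq
import math
from collections import defaultdict

CELL = 1000.0  # cell size in metres; tune to your point density

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

def build_index(parkings):
    index = defaultdict(list)
    for p in parkings:
        index[cell_of(*p)].append(p)
    return index

def k_closest(index, car, k=10, max_rings=1000):
    cx, cy = cell_of(*car)
    found = []
    ring, extra_rings = 0, 0
    # Once k candidates exist, search one extra ring so that a slightly
    # closer point just across a cell boundary is not missed.
    while extra_rings < 2 and ring < max_rings:
        for dx in range(-ring, ring + 1):
            for dy in range(-ring, ring + 1):
                if max(abs(dx), abs(dy)) != ring:
                    continue  # visit only cells on the current ring border
                for p in index.get((cx + dx, cy + dy), []):
                    heapq.heappush(found, (math.dist(car, p), p))
        if len(found) >= k:
            extra_rings += 1
        ring += 1
    return [heapq.heappop(found) for _ in range(min(k, len(found)))]

parkings = [(120.0, 450.0), (2300.0, 900.0), (980.0, 1500.0), (4000.0, 4000.0)]
index = build_index(parkings)
print(k_closest(index, car=(1000.0, 1000.0), k=2))
```

The recursive 4-way split described above is essentially a quadtree; a k-d tree or the nearest neighbour search already mentioned does the same job with better worst-case guarantees.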

Manhattan distance generalization

For research I'm working on, I'm trying to find a satisfying heuristic based on Manhattan distance that can work with any problem and domain as input - also known as a domain-independent heuristic.
For now, I know how to apply Manhattan distance to grid-based problems.
Can someone give a tip on how to generalize it to work on every domain and problem, not just grid-based ones?
The generalization of Manhattan distance is simple. It is a metric which defines the distance between two multi-dimensional points as the sum of the distances along each dimension:
md(A, B) = dist(a1, b1) + dist(a2, b2) + ...
The distances along each dimension are assumed to be simple to calculate. For numbers, the distance is the absolute value of the difference between the values.
This can be extended to other domains as well. For instance, the distance between two strings could be taken as the Levenshtein distance -- and that would prove to be an interesting metric in conjunction with other dimensions.
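For instance, a small sketch of that generalization in Python might look like this; the per-dimension distance functions and the example points are illustrative only:

```python
# Generalised Manhattan distance: sum a per-dimension distance over all
# dimensions. Numbers use absolute difference, strings use Levenshtein.
def levenshtein(a, b):
    """Textbook dynamic-programming edit distance, included only to keep
    the example self-contained."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def generalized_manhattan(point_a, point_b, dims):
    """dims maps each dimension name to the distance function used for it."""
    return sum(dist(point_a[d], point_b[d]) for d, dist in dims.items())

dims = {
    "x": lambda a, b: abs(a - b),
    "y": lambda a, b: abs(a - b),
    "label": levenshtein,            # a non-numeric dimension
}
a = {"x": 1, "y": 5, "label": "kitten"}
b = {"x": 4, "y": 2, "label": "sitting"}
print(generalized_manhattan(a, b, dims))  # 3 + 3 + 3 = 9
```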
The Manhattan distance heuristic is an attempt to estimate the minimum number of steps required to reach the goal state. The closer that estimate gets to the actual number of steps, the fewer nodes have to be expanded during the search; at the extreme, with a perfect heuristic, you only expand nodes that are guaranteed to be on the goal path.
For a more academic approach to generalizing this idea, you want to search for domain-independent heuristics; there was a lot of research done on this in the late 1990s and early 2000s, although even today a small amount of domain knowledge can usually get you much better results. That being said, there are some good places to start:
delete relaxation: the expand function probably contains some restrictions; remove one or more of those restrictions and you end up with a much easier problem, one that can probably be solved in real time, and you use the value produced by that relaxed problem as the heuristic value. E.g. in the sliding tile puzzle, delete the constraint that a piece cannot move on top of other pieces and you end up with the Manhattan distance; additionally relax the constraint that a piece can only move to adjacent squares and you end up with the Hamming distance heuristic.
abstraction: mapping every state in the real search to a smaller abstract state space that you can fully evaluate. Pattern databases are a very popular tool in this area.
critical paths: when you know you must pass through specific states (in either the real state space or an abstract state space), you can perform multiple searches between only the critical points to greatly cut down the number of nodes you would have to search in the full state space
landmarks: very accurate heuristics at the cost of typically high precomputation time. Landmarks are specific states from which you precompute the distance to every possible other state (typically 5-25 landmarks are used, depending on graph size); when evaluating each node, you then compute a lower bound on its distance to the goal using those precomputed values, as sketched below.
There are a few other classes of domain independent heuristics, but these are the most popular and widely used in classical planning applications.
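To illustrate the landmark idea from the list above (often called ALT in the literature), here is a small sketch; the toy graph, the choice of two landmarks, and the triangle-inequality lower bound are assumptions made purely for the example:

```python
# Landmark heuristic sketch: precompute exact distances from a few landmark
# nodes with Dijkstra, then use |d(L, goal) - d(L, n)| as an admissible
# lower bound on the distance from n to the goal.
import heapq

def dijkstra(graph, source):
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def landmark_heuristic(landmark_dists, node, goal):
    # The maximum over all landmarks is still a valid lower bound.
    return max(abs(d[goal] - d[node]) for d in landmark_dists)

graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("A", 2), ("C", 1), ("D", 4)],
    "C": [("A", 5), ("B", 1), ("D", 1)],
    "D": [("B", 4), ("C", 1)],
}
landmarks = ["A", "D"]                               # usually 5-25 on a big graph
landmark_dists = [dijkstra(graph, L) for L in landmarks]
print(landmark_heuristic(landmark_dists, "B", "D"))  # 2, and d(B, D) is indeed 2
```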

Reduce number of nodes in 3D A* pathfinding using (part of a) uniform grid representation

I am calculating pathfinding inside a mesh around which I have built a uniform grid. The nodes (cells in the 3D grid) close to what I deem a "standable" surface are marked as accessible and used in my pathfinding. To get a lot of detail (like being able to pathfind up small staircases), the amount of accessible cells in my grid has grown quite large - several thousand in larger buildings (every grid cell is 0.5 x 0.5 x 0.5 m and the meshes are rooms with real-world dimensions). Even though I only use a fraction of the actual cells in my grid for pathfinding, the huge amount slows the algorithm down. Other than that it works fine and finds the correct path through the mesh, using a weighted Manhattan distance heuristic.
Imagine my grid looks like that, with the mesh inside it (it can be more or fewer cubes, but it is always cubical); however, the pathfinding is not calculated on all the small cubes, just the few marked as accessible (usually at the bottom of the grid, though that depends on how many floors the mesh has).
I am looking to reduce the search space for the pathfinding. I have looked at clustering like how HPA* does it, and at other clustering algorithms like Markov clustering, but they all seem best suited to node graphs rather than grids. One obvious solution would be to just increase the size of the small cubes building the grid, but then I would lose a lot of detail in the pathfinding and it would not be as robust. How could I cluster these small cubes? This is how a typical search space looks when I do my pathfinding (blue cubes are accessible, green is the path):
And as you can see, there are a lot of cubes to search through because the distance between them is quite small!
Never mind that the grid is an unoptimal solution for pathfinding for now.
Does anyone have an idea of how to reduce the amount of cubes in the grid I have to search through, and how I would access the neighbors after I reduce the space? :) Right now it only looks at the closest neighbors while expanding the search space.
A couple possibilities come to mind.
Higher-level Pathfinding
The first is that your A* search may be searching the entire problem space. For example, you live in Austin, Texas, and want to get into a particular building somewhere in Alberta, Canada. A simple A* algorithm would search a lot of Mexico and the USA before finally searching Canada for the building.
Consider creating a second layer of A* to solve this problem. You'd first find out which states to travel between to get to Canada, then which provinces to reach Alberta, then Calgary, and then the Calgary Zoo, for example. In a sense, you start with an overview, then fill it in with more detailed paths.
If you have enormous levels, such as Skyrim's, you may need to add pathfinding layers between towns (multiple buildings), regions (multiple towns), and even countries (multiple regions). If you were making a GPS system, you might even need continents. If we ever become interstellar, our spaceships might contain pathfinding layers for planets, sectors, and even galaxies.
By using layers, you help to narrow down your search area significantly, especially if different areas don't use the same co-ordinate system! (It's fairly hard to estimate distance for one A* pathfinder if one of the regions needs latitude-longitude, another 3d-cartesian, and the next requires pathfinding through a time dimension.)
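To show the layering idea in miniature, here is a hedged two-level sketch: a coarse A* over regions first, then a detailed A* restricted to the regions the coarse path visited. The region size, the 4-connected grid, and the optimistic assumption that adjacent regions are always connected are simplifications made for the example, not a full HPA*-style implementation:

```python
# Two-level pathfinding sketch: coarse region search, then a fine search
# constrained to the corridor of regions found at the coarse level.
import heapq
from itertools import count

REGION = 4  # each region covers REGION x REGION grid cells

def astar(neighbors, start, goal, heuristic):
    """Generic A*; neighbors(n) yields (neighbor, step_cost) pairs."""
    tie = count()                       # breaks ties in the priority queue
    open_set = [(heuristic(start), next(tie), 0, start, None)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, _, cost, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt, w in neighbors(node):
            if cost + w < g.get(nxt, float("inf")):
                g[nxt] = cost + w
                heapq.heappush(open_set,
                               (g[nxt] + heuristic(nxt), next(tie), g[nxt], nxt, node))
    return None

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def hierarchical_path(walkable, start, goal):
    region = lambda c: (c[0] // REGION, c[1] // REGION)
    # Level 1: coarse search over region coordinates. For brevity we assume
    # adjacent regions are always connected; a real implementation would test this.
    def region_neighbors(r):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            yield (r[0] + dx, r[1] + dy), 1
    coarse = astar(region_neighbors, region(start), region(goal),
                   lambda r: manhattan(r, region(goal)))
    allowed = set(coarse)
    # Level 2: detailed search, expanded only inside regions on the coarse path.
    def cell_neighbors(c):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (c[0] + dx, c[1] + dy)
            if n in walkable and region(n) in allowed:
                yield n, 1
    return astar(cell_neighbors, start, goal, lambda c: manhattan(c, goal))

walkable = {(x, y) for x in range(16) for y in range(16)}  # an open 16x16 map
print(hierarchical_path(walkable, (0, 0), (15, 15)))
```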
More efficient algorithms
Finding efficient algorithms becomes more important in 3 dimensions because there are more nodes to expand while searching. A Dijkstra search that expands on the order of x^2 nodes in 2D expands on the order of x^3 nodes in 3D, with x being the distance between the start and the goal. A 4D game would require yet more efficiency in pathfinding.
One of the benefits of grid-based pathfinding is that you can exploit topographical properties like path symmetry. If two paths consist of the same movements in a different order, you don't need to find both of them. This is where a very efficient algorithm called Jump Point Search comes into play.
Here is a side-by-side comparison of A* (left) and JPS (right). Expanded/searched nodes are shown in red with walls in black:
Notice that they both find the same path, but JPS easily searched less than a tenth of what A* did.
As of now, I haven't seen an official 3-dimensional implementation, but I've helped another user generalize the algorithm to multiple dimensions.
Simplified Meshes (Graphs)
Another way to get rid of nodes during the search is to remove them before the search. For example, do you really need nodes in wide-open areas where you can trust a much more stupid AI to find its way? If you are building levels that don't change, create a script that parses them into the simplest grid which only contains important nodes.
This is actually called 'offline pathfinding'; basically finding ways to calculate paths before you need to find them. If your level will remain the same, running the script for a few minutes each time you update the level will easily cut 90% of the time you pathfind. After all, you've done most of the work before it became urgent. It's like trying to find your way around a new city compared to one you grew up in; knowing the landmarks means you don't really need a map.
Similar approaches to the 'symmetry-breaking' that Jump Point Search uses were introduced by Daniel Harabor, the creator of the algorithm. They are mentioned in one of his lectures, and allow you to preprocess the level to store only jump-points in your pathfinding mesh.
Clever Heuristics
Many academic papers state that A*'s cost function is f(x) = g(x) + h(x), which doesn't make it obvious that you may use other functions, multiply the weight of the cost functions, and even implement heatmaps of territory or recent deaths as functions. These may create sub-optimal paths, but they greatly improve the intelligence of your search. Who cares about the shortest path when your opponent has a choke point on it and has been easily dispatching anybody travelling through it? Better to be certain the AI can reach the goal safely than to let it be stupid.
For example, you may want to prevent the algorithm from letting enemies access secret areas so that they avoid revealing them to the player, and so that the AI seems to be unaware of them. All you need to achieve this is a uniform cost increase for any point within those 'off-limits' regions. In a game like this, enemies would simply give up on hunting the player once the path grew too costly. Another cool option is to 'scent' regions the player has recently been in (by temporarily increasing the cost of unvisited locations, because many algorithms dislike negative costs).
If you know what places you won't need to search, but can't implement in your algorithm's logic, a simple increase to their cost will prevent unnecessary searching. There's a lot of ways to take advantage of heuristics to simplify and inform your pathfinding, but your biggest gains will come from Jump Point Search.
EDIT: Jump Point Search implicitly selects pathfinding direction using the same heuristics as A*, so you may be able to implement heuristics to a small degree, but their cost function won't be the cost of a node, but rather, the cost of traveling between the two nodes. (A* generally searches adjacent nodes, so the distinction between a node's cost and the cost of traveling to it tends to break down.)
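As a small sketch of how such gameplay-driven terms could be folded into the per-step cost while h stays a plain distance estimate (the weights, the danger heatmap, and the secret-area set below are made-up examples, not a recipe):

```python
# Fold extra terms into the movement cost g while keeping the heuristic h a
# plain distance estimate. All constants and data here are illustrative.
DANGER_WEIGHT = 5.0
SECRET_PENALTY = 1000.0     # large enough that enemies give up on these paths

danger_heatmap = {}          # cell -> recent-death score, e.g. {(4, 2): 0.8}
secret_cells = set()         # cells the AI should pretend not to know about

def step_cost(from_cell, to_cell):
    cost = 1.0                                        # plain movement cost
    cost += DANGER_WEIGHT * danger_heatmap.get(to_cell, 0.0)
    if to_cell in secret_cells:
        cost += SECRET_PENALTY
    return cost

def f_score(g, cell, goal):
    # f(n) = g(n) + h(n); h stays a simple Manhattan distance to the goal.
    h = abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    return g + h

danger_heatmap[(4, 2)] = 0.8
print(step_cost((4, 1), (4, 2)))  # 1.0 + 5.0 * 0.8 = 5.0
```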
Summary
Although octrees/quad-trees/b-trees can be useful in collision detection, they aren't as applicable to searches because they section a graph based on its coordinates, not on its connections. Layering your graph (mesh in your vocabulary) into super graphs (regions) is a more effective solution.
Hopefully I've covered anything you'll find useful.
Good luck!

What data do I need to implement k nearest neighbor?

I currently have a reddit-clone type website. I'm trying to recommend posts based on the posts that my users have previously liked.
It seems like k-nearest neighbors or k-means is the best way to do this.
I can't seem to understand how to actually implement this. I've seen some mathematical formulas (such as the one on the k means wikipedia page), but they don't really make sense to me.
Could someone maybe recommend some pseudo code, or places to look so I can get a better feel on how to do this?
K-Nearest Neighbor (aka KNN) is a classification algorithm.
Basically, you take a training group of N items and classify them. How you classify them is completely dependent on your data, and what you think the important classification characteristics of that data are. In your example, this may be category of posts, who posted the item, who upvoted the item, etc.
Once this 'training' data has been classified, you can then evaluate an 'unknown' data point. You determine the 'class' of the unknown by locating the nearest neighbors to it in the classification system. If you determine the classification by the 3 nearest neighbors, it could then be called a 3-nearest-neighbor algorithm.
How you determine the 'nearest neighbor' depends heavily on how you classify your data. It is very common to plot the data into N-dimensional space where N represents the number of different classification characteristics you are examining.
A trivial example:
Let's say you have the longitude/latitude coordinates of a location that can be on any landmass anywhere in the world. Let us also assume that you do not have a map, but you do have a very large data set that gives you the longitude/latitude of many different cities in the world, and you also know which country those cities are in.
If I asked you which country a random longitude/latitude point is in, would you be able to figure it out? What would you do to figure it out?
Longitude/latitude data falls naturally into an X,Y graph. So, if you plotted out all the cities onto this graph, and then the unknown point, how would you figure out the country of the unknown? You might start drawing circles around that point, growing increasingly larger until the circle encompasses the 10 nearest cities on the plot. Now, you can look at the countries of those 10 cities. If all 10 are in the USA, then you can say with a fair degree of certainty that your unknown point is also in the USA. But if only 6 cities are in the USA, and the other 4 are in Canada, can you say where your unknown point is? You may still guess USA, but with less certainty.
The toughest part of KNN is figuring out how to classify your data in a way that you can determine 'neighbors' of similar quality, and the distance to those neighbors.
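Here is the city/country example as a minimal Python sketch; the tiny training set, the plain Euclidean distance on raw coordinates, and k = 3 are simplifications for illustration:

```python
# Classify an unknown (longitude, latitude) point by the majority country
# among its k nearest cities in the training data.
import math
from collections import Counter

cities = [
    ((-74.0, 40.7), "USA"),      # New York
    ((-75.2, 39.9), "USA"),      # Philadelphia
    ((-71.1, 42.4), "USA"),      # Boston
    ((-79.4, 43.7), "Canada"),   # Toronto
    ((-73.6, 45.5), "Canada"),   # Montreal
]

def knn_classify(training, unknown, k=3):
    nearest = sorted(training, key=lambda item: math.dist(item[0], unknown))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_classify(cities, (-76.0, 40.0), k=3))  # -> "USA"
```

For a reddit-style recommender you would replace the coordinates with whatever features you choose (categories, upvoters in common, ...) and the country label with 'liked / not liked'.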
What you described sounds like a recommender engine, not a clustering algorithm like k-means, which is in essence an unsupervised approach. I can't get a clear idea of what Reddit actually uses, but I found some interesting posts by googling around "recommender + reddit", e.g. Reddit, Stumbleupon, Del.icio.us and Hacker News Algorithms Exposed! Anyway, the k-NN algorithm (described in the top ten data mining algorithms, with pseudo-code on Wikipedia) might be used, or other techniques like collaborative filtering (used by Amazon, for example), described in this good tutorial.
k-means clustering in its simplest form averages values and groups the other values around one central average value. Suppose you have the following values:
1,2,3,4,6,7,8,9,10,11,12,21,22,33,40
Now if I do k-means clustering - remembering that k-means has a biasing (means/averaging) mechanism that pulls each value toward the nearest cluster center - we get the following.
cluster-1
1,2,3,4,6,7,8,9
cluster-2
10,11,12
cluster-3
21,22
cluster-4
33
cluster-5
40
Remember I just made up these cluster centers (cluster 1-5).
So the next time you do clustering, the numbers would end up around one of these central means (also known as k-centers). The data above is one-dimensional.
When you perform k-means clustering on large, multi-dimensional data sets (a multi-dimensional data point is an array of values, and you will have millions of them, all of the same dimension), you will need something bigger and scalable. You first average one array to get a single value, you repeat the same for the other arrays, and then perform the k-means clustering.
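For reference, a minimal one-dimensional k-means (Lloyd's algorithm) over the example values above might look like the following sketch; the naive seeding is an assumption, and a real implementation would use k-means++ or a library such as scikit-learn:

```python
# Minimal 1-D k-means: assign each value to its nearest centre, recompute
# the centres as cluster averages, and repeat until nothing moves.
def kmeans_1d(values, k, iterations=100):
    centers = sorted(values)[::max(1, len(values) // k)][:k]  # naive seeding
    clusters = []
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for v in values:   # assign each value to the nearest centre
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:   # converged
            break
        centers = new_centers
    return centers, clusters

values = [1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12, 21, 22, 33, 40]
centers, clusters = kmeans_1d(values, k=5)
print(centers)
print(clusters)
```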
Read one of my questions Here
Hope this helps.
To do k-nearest neighbors you mostly need a notion of distance and a way of finding the k nearest neighbours to a point that you can afford (you probably don't want to search through all your data points one by one). There is a library for approximate nearest neighbour at http://www.cs.umd.edu/~mount/ANN/. It's a very simple classification algorithm - to classify a new point p, find its k nearest neighbours and classify p according to the most popular classes amongst those k neighbours.
I guess in your case you could provide somebody with a list of similar posts as soon as you decide what nearest means, and then monitor click-through from this and try to learn from that to predict which of those alternatives would be most popular.
If you are interested in finding a particularly good learning algorithm for your purposes, have a look at http://www.cs.waikato.ac.nz/ml/weka/ - it allows you to try out a large number of different algorithms, and also to write your own as plug-ins.
Here is a very simple example of KNN for the MNIST dataset.
Once you are able to calculate the distance between your documents, the same algorithm will work:
http://shyamalapriya.github.io/digit-recognition-using-k-nearest-neighbors/

Optimal algorithm for path-finding in a matrix that does not fit entirely into memory

I'm facing a hard problem:
Imagine I have a map of an entire country, represented by a huge matrix of cells. Each cell represents 1 square meter of territory and holds a double value between 0 and 1 that represents the cost of traversing the cell.
The map obviously does not fit in memory.
I am trying to wrap my mind around a way to calculate the optimal path for a robot from a start position to an end position. The first idea I had was a sliding window (similar to TCP's), holding a minimap of the real map around the moving robot and executing the A* algorithm inside it, but I'm running into problems with maps with huge walls, bad pathfinding, and so on.
I am searching the literature on A*-like algorithms, but I cannot picture what a good solution to this problem would look like.
I'm wondering if someone has faced a similar problem or can help with an idea of a possible solution!
Thanks in advance :)
Since I do not know your exact data, here is some information that could be useful:
A partial path of a shortest path is itself a shortest path, i.e. you might split up your matrix into submatrices and find (all) shortest paths within them. Note that you do not have to store all results: you can save memory by, for example, not storing a complete path but just the information "the path goes from A to B"; the intermediate nodes can be recomputed later or stored in a file. You might even be able to precompute some shortest paths for certain areas.
Another approach is that you might be able to compress your matrix in some way. I.e. if you have large areas consisting only of one and the same number, it might be good to store just that number and the dimensions of that area.
Another approach (in connection with precomputing some shortest paths) is to generate different levels of detail of your map. Considering a map of the USA, this might look like the following: the coarsest level of detail contains just the cities New York, Los Angeles, Chicago, Dallas, Philadelphia, Houston and Phoenix. The finer the levels get, the more cities they contain, but, on the other hand, the smaller the area of the whole map each of them shows.
Does your problem have any special structure, e.g., does the triangle inequality hold/can you guarantee that the shortest path doesn't jog back and forth? Do you want to perform the query many times? (If so you can do pre-processing that will amortize over multiple queries.) Do you need the exact minimum solution, or will something within an epsilon factor be OK?
One thought was that you can coarsen the matrix - form 100 meter by 100 meter squares, and determine the shortest path distances through each 100 x 100 square. Now this will fit in memory (about 1 gigabyte), you can run Dijkstra on the coarse grid, and then expand each step through the corresponding 100 x 100 square.
Also, have you tried running a forward-backward version of Dijkstra's algorithm? I.e., expand from the source and from the sink at the same time, and stop when the two searches intersect.
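A hedged sketch of the coarsening suggestion might look like the following; the block size, the averaging rule, and the fully in-memory matrices are simplifications (the real matrix would be streamed from disk block by block):

```python
# Collapse each BLOCK x BLOCK group of cells into one coarse cell (here: the
# average cost of the block), then run Dijkstra on the much smaller grid.
import heapq

BLOCK = 100

def coarsen(costs):
    """costs: 2-D list of per-cell traversal costs. Returns the coarse grid."""
    rows, cols = len(costs), len(costs[0])
    coarse = []
    for r in range(0, rows, BLOCK):
        row = []
        for c in range(0, cols, BLOCK):
            block = [costs[i][j]
                     for i in range(r, min(r + BLOCK, rows))
                     for j in range(c, min(c + BLOCK, cols))]
            row.append(sum(block) / len(block))
        coarse.append(row)
    return coarse

def dijkstra_grid(grid, start, goal):
    """Cheapest cost from start to goal on a 4-connected grid of cell costs."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

fine = [[0.1] * 300 for _ in range(300)]   # stand-in for the huge cost matrix
coarse = coarsen(fine)                     # a 3 x 3 coarse grid
print(dijkstra_grid(coarse, (0, 0), (2, 2)))
```

Once the coarse route is known, the path inside each visited 100 x 100 block can be refined by loading only that block into memory, as suggested above.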
Incidentally, why do you need such a fine level of granularity?
Here are some ideas that may work
You can model your path as a piecewise linear curve. If you have 31 line segments, then your curve is fully described by 60 numbers. Each possible curve has a cost, so the cost is a function of the following form:
cost(x1, x2, x3, ..., x60)
Now your problem is to find the global optimum of a function of 60 variables. You can use standard methods to do this. One idea is to use genetic algorithms. Another is to use a Monte Carlo method such as parallel tempering:
http://en.wikipedia.org/wiki/Parallel_tempering
Whenever you have a promising path, you can use it as a starting point for finding a local minimum of the cost function. Maybe you can use some interpolation to make your cost function differentiable. Then you can use Newton's method (or rather BFGS) to find local minima of the cost function:
http://en.wikipedia.org/wiki/Local_minimum
http://en.wikipedia.org/wiki/BFGS
Your problem is somewhat similar to the problem of finding reaction paths in chemical systems. Maybe you can find some inspiration in the book "Energy Landscapes" by David Wales.
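To make the parameterisation concrete, here is a hedged sketch: 30 interior waypoints (60 numbers) define the piecewise linear path, and the cost is estimated by sampling a stand-in cost function along each segment. The start and goal points and cost_at() are made up; the real version would read traversal costs from the cell matrix, and path_cost() could then be handed to a genetic algorithm, parallel tempering, or e.g. scipy.optimize.minimize(path_cost, initial_guess, method="BFGS"):

```python
# Piecewise-linear path cost: 60 free variables (30 interior waypoints),
# evaluated by sampling a terrain cost function along each of the 31 segments.
import math

START, GOAL = (0.0, 0.0), (100.0, 100.0)

def cost_at(x, y):
    # Stand-in for "look up the traversal cost of the cell containing (x, y)":
    # cheap everywhere except a costly ridge around x = 50.
    return 0.1 + 0.9 * math.exp(-((x - 50.0) ** 2) / 50.0)

def path_cost(params, samples_per_segment=10):
    """params: flat list [x1, y1, x2, y2, ...] of the interior waypoints."""
    pts = [START] + [(params[i], params[i + 1])
                     for i in range(0, len(params), 2)] + [GOAL]
    total = 0.0
    for (ax, ay), (bx, by) in zip(pts, pts[1:]):
        length = math.hypot(bx - ax, by - ay)
        for s in range(samples_per_segment):
            t = (s + 0.5) / samples_per_segment
            total += cost_at(ax + t * (bx - ax),
                             ay + t * (by - ay)) * (length / samples_per_segment)
    return total

# A straight-line initial guess with 30 interior waypoints (60 variables).
initial_guess = [v for i in range(1, 31) for v in (i * 100.0 / 31, i * 100.0 / 31)]
print(path_cost(initial_guess))
```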
But I also have some questions:
Is it necessary for you to find the optimal path, or are you just looking for a path that is OK?
How much computer power and time do you have at hand?
Can the robot make sharp turns, or do you need extra physics modelling to improve the cost function?
