There are two real-life problems I am struggling to find an answer to:
Restaurant service: When I use a food ordering app (like Foodpanda, Zomato, etc.), the app detects my location when I log in and suggests nearby restaurants accordingly (presumably within a range small enough that the selected restaurant can deliver the food).
Cab service: When I use a cab service (like Uber or Ola), it also detects my location when I try to book a cab and suggests the nearby cabs available at that time.
Question :
How is finding the nearest restaurants and the nearest cabs actually done? Which specific algorithms do these services use in practice? The two cases differ in that the search data is static in one case and constantly changing in the other.
My Take on the question :
After some brainstorming on the topic, I realised that since restaurants are fixed entities, we can store them in a k-d tree (which acts as a spatial index). Based on the customer's location, we can then search the k-d tree to find a set of nearby restaurants. Building the tree takes O(n log n) time and a nearest-neighbour search takes O(log n) time, n being the number of nodes in the tree. This approach seems good enough to me, as I am not aware of anything better, but I am still looking for a better answer.
In the case of the cab service, the positions of the cabs aren't static (unlike the restaurants). So rebuilding a k-d tree for every change in cab locations seems like too much overhead. How can I find, say, the 5 nearest cabs to me given the cabs' current locations? Which algorithms do the cab services use in practice?
Any insight will be highly appreciated.
P.S.: I also came across the k-nearest-neighbour search algorithm, which again leads to k-d trees.
There is a data structure called a quadtree that serves as a dynamic 2D alternative to the k-d tree. In practice, though, a real implementation may not involve heavy data structures at all. Consider a simpler method like the following, which can potentially outperform a dynamic k-d tree:
Divide the map into rectangular grid cells.
When a cab changes its location (a POST request containing its GPS coordinates), check whether it is still in the same rectangle; if not, update the affected rectangles' contents.
When a user wants to find the nearest cab, locate the user's rectangle in the grid and find the nearest cab with an exhaustive search over that rectangle (and its neighbours if needed).
You may ask "why?". In reality, the simple method above makes it much easier to add concurrency and parallelism, whereas a k-d tree can be very hard to modify concurrently.
For a game I'm playing, I would like to know the fastest order in which to build a number of buildings. The game in question is OGame; if you're familiar with it, that is a plus, but I will explain the basics of the game:
The player has a number of different resources available.
The player can build buildings, maximum one building at a time.
Buildings have different levels, for example Building A level 1, Building A level 2, etc.
The resource cost to build buildings increases per level.
Buildings also cost time to build; this too increases per level.
Some of the buildings produce the different resources; production also increases per level.
Some of the buildings change the calculations such that buildings get built faster.
I have explicitly chosen not to show the equations, as they are not straightforward and should not be needed to suggest an algorithm.
I have chosen to model this with the following actions:
StartUpgradeBuildingAction: This action starts the upgrade process by subtracting the cost from the available resources.
FinishUpgradeBuildingAction: This action finishes the upgrade process by going forward in time. This also produces resources.
WaitAction: This action forwards the time by X seconds and meanwhile produces the resources according to the resource production.
It should be noted that the state space is infinite and can be characterized by the fact that there are multiple paths to the final configuration (where all requested buildings have been built), each possibly having a different time it takes and a different amount of resources you end up with in the end. Now I am most interested in the path (the order) that is the fastest, and if there are multiple equal paths, then the path that costs the least should be preferred.
I have already tried the following approaches:
Breadth-First Search
Depth-First Search
Iterative Deepening Depth-First Search
Iterative Deepening A* Search
A* Search
Unfortunately all of these algorithms either take too long, or use too much memory.
As googling has not given me any further leads, I ask the following questions here:
Is there an already existing model that matches my problem? I would think that Business Information Systems for example would have encountered this type of problem already.
Does an algorithm exist that gives the best solution? If so, which one?
Is there an algorithm that gives a solution that is close to the best solution? If so, which one?
Any help is appreciated.
There is no single algorithm that gives you the best solution to your problem. The approaches you tried are all reasonable. However, having tried A* search doesn't mean much on its own, since A* depends on a heuristic that evaluates a given configuration (i.e. it assigns a value to the combination of the amount of time passed, the number and selection of buildings built, available resources, etc.). With a good heuristic, A* search might lead you to a very good solution quickly. Finding this heuristic requires good knowledge of the parameters (costs of buildings, benefits of upgrades, etc.).
However, my feeling is that your problem is structured in such a way that one series of build decisions can clearly outperform another after a small number of steps. Let's say you build buildings A, B and C in this order, building each one as soon as the required resources are available. Then you try the order C, A, B. You will probably find that one alternative dominates the other, in that you end up with the same buildings but in one alternative you have more resources than in the other. Of course this is less likely if you have many different resources: you might have more of resource X but less of Y, which makes the situations harder to compare. If such comparisons are possible, the good news is that you don't need a heuristic; you clearly see which path to follow and which to cut off.
Anyway, I would explore how many steps it takes until you find paths that you can dismiss based on such a consideration. If you find them quickly, it makes sense to follow a breadth-first strategy and prune branches as early as possible. A depth-first search risks spending a lot of time exploring inferior paths.
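To make the dominance test concrete, here is a rough Python sketch of such a pruning step; the state fields used (buildings, time, resources) are placeholders for whatever your real state object carries:

```python
def dominates(a, b):
    """Return True if state `a` is at least as good as state `b` in every respect.

    Comparable only when both states have built the same set of buildings;
    the attribute names are placeholders, not the asker's actual model.
    """
    if a.buildings != b.buildings:
        return False
    if a.time > b.time:
        return False
    return all(a.resources[r] >= b.resources[r] for r in a.resources)

def prune(frontier):
    """Drop every frontier state that is dominated by some other state."""
    return [s for s in frontier
            if not any(dominates(other, s) for other in frontier if other is not s)]
```

Applied after each breadth-first expansion, this keeps the frontier small whenever the resource vectors really are comparable.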
I'm writing a courier/logistics simulation on OpenStreetMap maps and have realised that the basic A* algorithm as pictured below is not going to be fast enough for large maps (like Greater London).
The green nodes correspond to ones that were put in the open set/priority queue, and due to the huge number of nodes (the whole map has something like 1-2 million), it takes 5 seconds or so to find the route pictured. Unfortunately, 100 ms per route is about my absolute limit.
Currently, the nodes are stored in both an adjacency list and also a spatial 100x100 2D array.
I'm looking for methods where I can trade off preprocessing time, space and if needed optimality of the route, for faster queries. The straight-line Haversine formula for the heuristic cost is the most expensive function according to the profiler - I have optimised my basic A* as much as I can.
For example, I was thinking that if I chose an arbitrary node X from each quadrant of the 2D array and ran A* between every pair of them, I could store those routes to disk for subsequent simulations. When querying, I would then only need to run A* within the start and end quadrants to connect each endpoint to its quadrant's X, and join that with the precomputed X-to-X route.
Is there a more refined version of what I've described above, or perhaps a different method I should pursue? Many thanks!
For the record, here are some benchmark results for arbitrarily weighting the heuristic cost and computing the path between 10 pairs of randomly picked nodes:
Weight   AvgDist%      Time (ms)
1        1             1461.2
1.05     1             1327.2
1.1      1             900.7
1.2      1.019658848   196.4
1.3      1.027619169   53.6
1.4      1.044714394   33.6
1.5      1.063963413   25.5
1.6      1.071694171   24.1
1.7      1.084093229   24.3
1.8      1.092208509   22
1.9      1.109188175   22.5
2        1.122856792   18.2
2.2      1.131574742   16.9
2.4      1.139104895   15.4
2.6      1.140021962   16
2.8      1.14088128    15.5
3        1.156303676   16
4        1.20256964    13
5        1.19610861    12.9
Surprisingly, increasing the coefficient to 1.1 almost halved the execution time while keeping the same route.
You should be able to make it much faster by trading off optimality. See Admissibility and optimality on wikipedia.
The idea is to use an epsilon value which will lead to a solution no worse than 1 + epsilon times the optimal path, but which will cause fewer nodes to be considered by the algorithm. Note that this does not mean that the returned solution will always be 1 + epsilon times the optimal path. This is just the worst case. I don't know exactly how it would behave in practice for your problem, but I think it is worth exploring.
Wikipedia lists a number of algorithms that rely on this idea. I believe this is your best bet to improve the algorithm, and it has the potential to run within your time limit while still returning good paths.
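For illustration, a hedged Python sketch of weighted A* (A* with the heuristic inflated by epsilon); the graph and heuristic interfaces here are assumptions, not the asker's code:

```python
import heapq
import itertools

def weighted_astar(graph, start, goal, heuristic, epsilon=1.2):
    """A* with an inflated heuristic: the returned path is no worse than
    epsilon times the optimal one.

    Assumes graph[node] yields (neighbour, edge_cost) pairs and that
    heuristic(node, goal) never overestimates the true remaining cost."""
    tie = itertools.count()                       # tie-breaker for the heap
    open_heap = [(epsilon * heuristic(start, goal), next(tie), 0.0, start, None)]
    came_from = {}                                # node -> parent, doubles as closed set
    best_g = {start: 0.0}
    while open_heap:
        _, _, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:
            path = []
            while node is not None:               # walk parents back to the start
                path.append(node)
                node = came_from[node]
            return list(reversed(path))
        for neighbour, cost in graph[node]:
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                f = new_g + epsilon * heuristic(neighbour, goal)
                heapq.heappush(open_heap, (f, next(tie), new_g, neighbour, node))
    return None                                   # goal unreachable
```

The benchmark table in the question is essentially this idea, with epsilon playing the role of the "Weight" column.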
Since your algorithm deals with millions of nodes in 5 seconds, I assume you also use binary heaps for the implementation, correct? If you implemented them manually, make sure they are backed by simple arrays and that they really are binary heaps.
There are specialist algorithms for this problem that do a lot of pre-computation. From memory, the pre-computation adds information to the graph that A* uses to produce a much more accurate heuristic than straight line distance. Wikipedia gives the names of a number of methods at http://en.wikipedia.org/wiki/Shortest_path_problem#Road_networks and says that Hub Labelling is the leader. A quick search on this turns up http://research.microsoft.com/pubs/142356/HL-TR.pdf. An older one, using A*, is at http://research.microsoft.com/pubs/64505/goldberg-sp-wea07.pdf.
Do you really need to use Haversine? To cover London, I would have thought you could have assumed a flat earth and used Pythagoras, or stored the length of each link in the graph.
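As a sketch of that cheaper alternative, here is an equirectangular (flat-earth) approximation in Python; over a city-sized area such as Greater London its error relative to Haversine is negligible, and it needs only one trig call:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean earth radius in metres

def flat_distance(lat1, lon1, lat2, lon2):
    """Equirectangular approximation of the distance between two points
    given in degrees. Much cheaper than Haversine and accurate enough
    over a small area."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return EARTH_RADIUS_M * math.hypot(x, y)
```

Precomputing and storing each edge's length in the graph, as suggested above, removes the per-relaxation trig cost entirely; the heuristic is then the only place a distance formula is still needed.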
There's a really great article that Microsoft Research wrote on the subject:
http://research.microsoft.com/en-us/news/features/shortestpath-070709.aspx
The original paper is hosted here (PDF):
http://www.cc.gatech.edu/~thad/6601-gradAI-fall2012/02-search-Gutman04siam.pdf
Essentially, there are a few things you can try:
Start from both the source and the destination. This helps to minimize the amount of wasted work you'd perform when traversing from the source outwards towards the destination.
Use landmarks and highways. Essentially, find some positions in each map that are commonly taken paths and perform some pre-calculation to determine how to navigate efficiently between those points. If you can find a path from your source to a landmark, then to other landmarks, then to your destination, you can quickly find a viable route and optimize from there.
Explore algorithms like the "reach" algorithm. This helps to minimize the amount of work that you'll do when traversing the graph by minimizing the number of vertices that need to be considered in order to find a valid route.
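As an illustration of the landmark idea, here is a small sketch of an ALT-style lower bound; it assumes you have precomputed shortest-path distances from a handful of landmarks and is not taken from the papers above:

```python
def landmark_heuristic(node, goal, dist_from_landmark):
    """ALT-style heuristic: by the triangle inequality,
    |d(L, goal) - d(L, node)| <= d(node, goal) for every landmark L,
    so the largest such gap is a valid (and usually much tighter) lower bound.

    dist_from_landmark[L][v] holds the precomputed distance from landmark L
    to node v (assumed symmetric here for simplicity)."""
    best = 0.0
    for dists in dist_from_landmark.values():
        bound = abs(dists[goal] - dists[node])
        if bound > best:
            best = bound
    return best
```

Plugged into A* in place of the straight-line distance, this typically shrinks the searched area considerably, at the cost of storing one distance table per landmark.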
GraphHopper does two more things to get fast, non-heuristic and flexible routing (note: I'm the author, and you can try it online here).
A not so obvious optimization is to avoid 1:1 mapping of OSM nodes to internal nodes. Instead GraphHopper uses only junctions as nodes and saves roughly 1/8th of traversed nodes.
It has efficient implementations of A*, Dijkstra and, e.g., one-to-many Dijkstra, which make a route through the whole of Germany possible in under 1 s. The (non-heuristic) bidirectional version of A* makes this even faster.
So it should be possible to get fast routes for Greater London.
Additionally, the default mode is the speed mode, which makes everything an order of magnitude faster (e.g. 30 ms for Europe-wide routes) but less flexible, as it requires preprocessing (Contraction Hierarchies). If you don't like this, just disable it, and also fine-tune the included streets for cars, or better, create a new profile for trucks - e.g. excluding service roads and tracks, which should give you a further 30% boost. And as with any bidirectional algorithm, you could easily implement a parallel search.
I think it's worth working out your idea with "quadrants". More strictly, I'd call it a low-resolution route search.
You may pick X connected nodes that are close enough and treat them as a single low-resolution node. Divide your whole graph into such groups, and you get a low-resolution graph. This is a preparation stage.
In order to compute a route from source to target, first identify the low-resolution nodes they belong to and find the low-resolution route. Then improve your result by finding the route on the high-resolution graph, restricting the algorithm to nodes that belong to the low-resolution nodes of the low-resolution route (optionally, you may also consider neighbouring low-resolution nodes up to some depth).
This may also be generalized to multiple resolutions, not just high/low.
In the end you should get a route that is close enough to optimal. It is locally optimal, but may be somewhat worse than the global optimum, by an amount that depends on the resolution jump (i.e. the approximation you make when a group of nodes is treated as a single node).
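A rough sketch of that two-level idea, assuming you already have a shortest-path routine that can be restricted to a set of allowed nodes (all of the helper names here are hypothetical):

```python
def coarse_then_fine(fine_graph, coarse_graph, node_to_cluster,
                     cluster_members, shortest_path, src, dst):
    """Two-level route search.

    shortest_path(graph, a, b, allowed=None) is assumed to be any
    shortest-path routine (e.g. A*) that can optionally restrict its
    expansion to a given set of allowed nodes."""
    # 1. Route on the low-resolution graph of clusters.
    coarse_route = shortest_path(coarse_graph,
                                 node_to_cluster[src],
                                 node_to_cluster[dst])
    # 2. The corridor: fine-grained nodes inside the clusters on that route.
    allowed = set()
    for cluster in coarse_route:
        allowed.update(cluster_members[cluster])
    # 3. Refine on the high-resolution graph, restricted to the corridor.
    return shortest_path(fine_graph, src, dst, allowed=allowed)
```

Widening the corridor by also including the members of neighbouring clusters trades a little query time for routes closer to the true optimum.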
There are dozens of A* variations that may fit the bill here. You have to think about your use cases, though.
Are you memory- (and also cache-) constrained?
Can you parallelize the search?
Will your algorithm implementation be used in one location only (e.g. Greater London and not NYC or Mumbai or wherever)?
There's no way for us to know all the details that you and your employer are privy to. Your first stop thus should be CiteSeer or Google Scholar: look for papers that treat pathfinding with the same general set of constraints as you.
Then downselect to three or four algorithms, do the prototyping, test how they scale up, and fine-tune them. You should bear in mind that you can combine various algorithms in the same grand pathfinding routine based on distance between the points, time remaining, or any other factors.
As has already been said, given the small scale of your target area, dropping Haversine is probably your first step, saving precious time on expensive trig evaluations. NOTE: I do not recommend using Euclidean distance on raw lat/lon coordinates - reproject your map into, e.g., a transverse Mercator projection centred on your area and use Cartesian coordinates in yards or metres!
Precomputing is the second one, and changing compilers may be an obvious third idea (switch to C or C++ - see https://benchmarksgame.alioth.debian.org/ for details).
Extra optimization steps may include getting rid of dynamic memory allocation, and using efficient indexing for search among the nodes (think R-tree and its derivatives/alternatives).
I worked at a major navigation company, so I can say with confidence that 100 ms should get you a route from London to Athens, even on an embedded device. Greater London would be a test map for us, as it's conveniently small (it easily fits in RAM, though that isn't actually necessary).
First off, A* is entirely outdated. Its main benefit is that it "technically" doesn't require preprocessing. In practice, you need to pre-process an OSM map anyway so that's a pointless benefit.
The main technique to give you a huge speed boost is arc flags. If you divide the map in say 5x6 sections, you can allocate 1 bit position in a 32 bits integer for each section. You can now determine for each edge whether it's ever useful when traveling to section {X,Y} from another section. Quite often, roads are bidirectional and this means only one of the two directions is useful. So one of the two directions has that bit set, and the other has it cleared. This may not appear to be a real benefit, but it means that on many intersections you reduce the number of choices to consider from 2 to just 1, and this takes just a single bit operation.
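A tiny sketch of how the per-edge arc-flag test might look, assuming one bit per target section packed into a 32-bit integer as described (the 5x6 split and all names are just for illustration):

```python
SECTIONS_X, SECTIONS_Y = 5, 6   # 30 sections -> 30 of the 32 bits used

def section_bit(section_x, section_y):
    """Bit position reserved for target section (section_x, section_y)."""
    return section_y * SECTIONS_X + section_x

def edge_is_useful(edge_flags, target_x, target_y):
    """During the search, an edge is worth relaxing only if its flag bit for
    the section containing the destination is set. edge_flags is the 32-bit
    integer stored per directed edge at preprocessing time."""
    return bool(edge_flags & (1 << section_bit(target_x, target_y)))
```

The expensive part is the preprocessing that computes the flags; at query time the check is a single AND per edge.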
Usually A* suffers from excessive memory consumption rather than time trouble.
However, I think it could be useful to first compute only with nodes that are part of "big streets"; you would usually choose a highway over a tiny alley anyway.
I guess you may already use this in your weight function, but you can be faster if you use a priority queue to decide which node to expand next.
Also, you could try reducing the graph to only the nodes that are part of low-cost edges and then finding a way from the start/end to the closest of these nodes.
So you have two paths: from the start to a "big street", and from a "big street" to the end.
You can then compute the best path between the two "big street" nodes in the reduced graph.
Old question, but yet:
Try using heaps other than the binary heap. The heap with the best asymptotic complexity is definitely the Fibonacci heap, and its wiki page has a nice overview:
https://en.wikipedia.org/wiki/Fibonacci_heap#Summary_of_running_times
Note that the binary heap has simpler code, is laid out over an array, and traverses that array predictably, so a modern CPU executes binary heap operations much faster.
However, given a big enough dataset, other heaps will win over the binary heap because of their asymptotic complexities...
This question's dataset seems big enough.
I am looking for a method to do fast nearest-neighbour search (hopefully O(log n)) for high-dimensional points (typically ~11-13 dimensions). I would like it to behave optimally during insertions after the structure has been initialized. A k-d tree came to mind, but if you do not bulk-load and instead do dynamic insertions, the k-d tree ceases to be balanced, and AFAIK rebalancing is an expensive operation.
So, I wanted to know what data structures would you prefer for such kind of setting. You have high dimensional points and you would like to do insertions and query for nearest neighbour.
Another data structure that comes to mind is the cover tree. Unlike KD trees which were originally developed to answer range queries, this data structure is optimal for nearest neighbor queries. It has been used in n-body problems that involve computing the k nearest neighbors of all the data points. Such problems also occur in density estimation schemes (Parzen windows).
I don't know enough about your specific problem, but I do know that there are online versions of this data structure. Check out Alexander Gray's page and this link
The Curse of Dimensionality gets in the way here. You might consider applying Principal Component Analysis (PCA) to reduce the dimensionality, but as far as I know, nobody has a great answer for this.
I have dealt with this type of problem before (in audio and video fingerprinting), sometimes with up to 30 dimensions. Analysis usually revealed that some of the dimensions did not contain relevant information for searches (actually fuzzy searches, my main goal), so I omitted them from the index structures used to access the data, but included them in the logic to determine matches from a list of candidates found during the search. This effectively reduced the dimensionality to a tractable level.
I simplified things further by quantizing the remaining dimensions severely, such that the entire multidimensional space was mapped into a 32-bit integer. I used this as the key in an STL map (a red-black tree), though I could have used a hash table. I was able to add millions of records dynamically to such a structure (RAM-based, of course) in about a minute or two, and searches took about a millisecond on average, though the data was by no means evenly distributed. Searches required careful enumeration of values in the dimensions that were mapped into the 32-bit key, but were reliable enough to use in a commercial product. I believe it is used to this day in iTunes Match, if my sources are correct. :)
The bottom line is that I recommend you take a look at your data and do something custom that exploits features in it to make for fast indexing and searching. Find the dimensions that vary the most and are the most independent of each other. Quantize those and use them as the key in an index. Each bucket in the index contains all items that share that key (there will likely be more than one). To find nearest neighbors, look at "nearby" keys and within each bucket, look for nearby values. Good luck.
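As a hedged sketch of that quantize-and-bucket idea (the dimension choice, bit widths, and value ranges are assumptions you would derive from analysing your own data):

```python
def quantize_key(point, dims, bits_per_dim, lows, highs):
    """Pack a few heavily quantized dimensions into one integer key.

    dims lists the indices of the dimensions chosen for the index,
    bits_per_dim says how coarsely each one is quantized, and lows/highs
    give the expected value range of each chosen dimension."""
    key = 0
    for d, bits, low, high in zip(dims, bits_per_dim, lows, highs):
        levels = (1 << bits) - 1
        q = int((point[d] - low) / (high - low) * levels)
        q = max(0, min(levels, q))            # clamp out-of-range values
        key = (key << bits) | q
    return key

# key -> list of items sharing that quantized key; nearest neighbours are
# found by probing nearby keys and ranking candidates by the true distance
# computed over all dimensions.
index = {}
```

The buckets can live in any ordered or hashed map (the answer above used an STL map); the important part is that the key captures the most informative dimensions.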
p.s. I wrote a paper on my technique, available here. Sorry about the paywall. Perhaps you can find a free copy elsewhere. Let me know if you have any questions about it.
If you use a bucket k-d tree with a reasonably large bucket size, the tree gets a better idea of where to split when the leaves get too full. The guys in Robocode do this under extremely harsh time constraints, with random insertions happening on the fly and kNN with k > 80, d > 10 and n > 30k in under 1 ms. Check out this kD-Tree Tutorial, which explains a bunch of k-d tree enhancements and how to implement them.
In my experience, 11-13 dimensions is not too bad -- if you bulk-load. Both bulk-loaded R-trees (in contrast to k-d-trees these remain balanced!) and k-d-trees should still work much better than linear scanning.
Once you go fully dynamic, my experiences are much worse. Roughly: with bulk loaded trees I'm seeing 20x speedups, with incrementally built R-trees just 7x. So it does pay off to frequently rebuild the tree. And depending on how you organize your data, it may be much faster than you think. The bulk load for the k-d-tree that I'm using is O(n log n), and I read that there is a O(n log log n) variant, too. With a low constant factor. For the R-tree, Sort-Tile-Recursive is the best bulk load I have seen so far, and also O(n log n) with a low constant factor.
So yes, in high dimensionality I would consider just rebuilding the tree from time to time.
I'm implementing a bidirectional A* search (bidirectional as in the search is performed from both the origin and destination simultaneously, and when these two searches meet, I'll have my shortest path - at least with a bit of extra logic thrown in).
Does anyone have any experience with taking a unidirectional A* and bidirectionalising(!) it - what sort of performance gain can I expect? I'd reckoned on it more or less halving the search time at a minimum, but might I see bigger gains than that? I'm using the algorithm to determine the shortest routes on a road network, if that's in any way relevant (I've read about MS's "Reach" algorithm, but want to take baby steps towards this rather than jumping straight in).
In the best possible case it'd run in O(b^(n/2)) instead of O(b^n), but that's only if you're lucky :)
(where b is your branching factor and n is the number of nodes a unidirectional A* would consider)
It all depends on how easily the two searches meet, if they find each other at a good halfway point early in the search you've done away with a lot of search time, but if they branch into wildly different directions you may end up with something slower than simple A* (because of all the extra bookkeeping)
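To make the meeting logic concrete, here is a sketch of the bidirectional skeleton. It uses plain Dijkstra so the standard stopping rule stays simple (a bidirectional A* additionally needs carefully matched forward and backward heuristics), and it returns only the distance:

```python
import heapq

def bidirectional_dijkstra(graph, reverse_graph, source, target):
    """Bidirectional shortest-path sketch. graph[u] and reverse_graph[u] are
    assumed to yield (neighbour, cost) pairs; returns the s-t distance."""
    if source == target:
        return 0.0
    dist = [{source: 0.0}, {target: 0.0}]        # forward / backward labels
    heaps = [[(0.0, source)], [(0.0, target)]]
    settled = [set(), set()]
    best = float("inf")                          # best source-target distance so far

    while heaps[0] and heaps[1]:
        # Standard stopping rule: once the two frontier minima together cannot
        # beat the best path already found, that path is optimal.
        if heaps[0][0][0] + heaps[1][0][0] >= best:
            break
        side = 0 if heaps[0][0][0] <= heaps[1][0][0] else 1
        d, u = heapq.heappop(heaps[side])
        if u in settled[side]:
            continue
        settled[side].add(u)
        edges = graph if side == 0 else reverse_graph
        for v, w in edges[u]:
            nd = d + w
            if nd < dist[side].get(v, float("inf")):
                dist[side][v] = nd
                heapq.heappush(heaps[side], (nd, v))
            if v in dist[1 - side]:              # the two frontiers touch at v
                best = min(best, dist[side][v] + dist[1 - side][v])
    return best if best != float("inf") else None
```

The "extra bookkeeping" mentioned above is exactly this: two label sets, two queues, and tracking the best meeting point found so far.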
You may want to try https://github.com/sroycode/tway
There is a benchmarking script that compares this with standard A*
(for NYC road data it seems to give a 30% time benefit).
You may consider clustering as a much more efficient booster. Also see this article.
I'm wondering if there is an algorithm for calculating the nearest locations (represented by lat/long) in better than O(n) time.
I know I could use the Haversine formula to get the distance from the reference point to each location and sort ASC, but this is inefficient for large data sets.
How does the MySQL DISTANCE() function perform? I'm guessing O(n)?
If you use a kd-tree to store your points, you can do this in O(log n) time (expected) or O(sqrt(n)) worst case.
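For example, with SciPy's k-d tree (this assumes the points have already been projected to planar coordinates; raw lat/long would need scaling or projection first):

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical sample data: 100k locations already projected to planar x/y.
points = np.random.rand(100_000, 2)

tree = cKDTree(points)                      # build once: O(n log n)
dist, idx = tree.query([0.5, 0.5], k=3)     # 3 nearest neighbours per query
print(points[idx], dist)
```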
You mention MySql, but there are some pretty sophisticated spatial features in SQL Server 2008 including a geography data type. There's some information out there about doing the types of things you are asking about. I don't know spatial well enough to talk about perf. but I doubt there is a bounded time algorithm to do what you're asking, but you might be able to do some fast set operations on locations.
If the data set being searched is static, e.g., the coordinates of all gas stations in the US, then a proper index (BSP) would allow for efficient searching. Postgres has had good support since the mid 90's for 2-dimensional indexed data so you can do just this sort of query.
Better than O(n)? Only if you go the way of radix sort or store the locations with hash keys that represent the general location they are in.
For instance, you could divide the globe into a latitude/longitude grid down to the minute, enumerate the resulting areas, and make the hash for a location its area. When the time comes to get the closest location, you then only need to check at most 9 hash keys -- and you can test beforehand whether an adjacent cell can possibly provide a closer location than the best found so far, further reducing the set of locations whose distance has to be computed. It's still O(n), but with a much smaller constant factor. Properly implemented, you won't even notice it.
Or, if the data is in memory or otherwise randomly accessible, you could store it sorted by both latitude and longitude. You then use binary search to find the closest latitude and longitude in the respective data sets. Next, you keep reading locations with increasing latitude or longitude difference (i.e. the preceding and succeeding locations), until it becomes impossible to find a closer location.
You know you can't find a closer location when the latitude of the next location to either side of the latitude-sorted data wouldn't be closer than the best found so far even if it lay at the same longitude as the query point. A similar test applies to the longitude-sorted data.
This actually gives you better than O(n) -- closer to O(log n), I think -- but it does require random rather than sequential access to the data, and duplication of all the data (or at least of the keys).
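A small sketch of the latitude half of that scheme in Python, using a flat-earth distance (in degrees) for brevity; the longitude-sorted pass would mirror it:

```python
import bisect
import math

def nearest_by_latitude(points_by_lat, qlat, qlon):
    """points_by_lat is a list of (lat, lon) tuples sorted by latitude.
    Walk outward from the binary-search position; once the latitude gap
    alone exceeds the best distance found, no closer point can remain on
    that side, so the scan stops."""
    start = bisect.bisect_left(points_by_lat, (qlat, qlon))
    best, best_point = float("inf"), None

    def consider(j):
        nonlocal best, best_point
        lat, lon = points_by_lat[j]
        if abs(lat - qlat) >= best:
            return False                      # this side is exhausted
        d = math.hypot(lat - qlat, (lon - qlon) * math.cos(math.radians(qlat)))
        if d < best:
            best, best_point = d, (lat, lon)
        return True

    j = start
    while j < len(points_by_lat) and consider(j):   # scan upward in latitude
        j += 1
    j = start - 1
    while j >= 0 and consider(j):                   # scan downward in latitude
        j -= 1
    return best_point, best
```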
I wrote an article about finding the nearest line at DDJ a couple of years ago, using a grid (I call them quadrants). Using it to find the nearest point (instead of the nearest line) is just a reduction of it.
Using quadrants reduces the time drastically, although the complexity is not determinable mathematically (all points could theoretically lie in a single quadrant). A precondition of using quadrants/grids is that you have a maximum distance for the point searched. If you just look for the nearest point, without a maximum distance, you can't use quadrants.
In that case, have a look at A Template for the Nearest Neighbor Problem (Larry Andrews at DDJ), which has a retrieval complexity of O(log n). I did not compare the runtime of the two algorithms. Probably, if you have a reasonable maximum width, quadrants are better. The better general-purpose algorithm is the one from Larry Andrews.
If you are looking for the single closest location, there's no need to sort. Simply iterate through your list, calculating the distance to each point and keeping track of the closest one. By the time you get through the list, you'll have your answer.
Even better would be to introduce the concept of grids. You would assign each point to a grid. Then, for your search, first determine the grid you are in and perform your calculations on the points in the grid. You'll need to be a little careful though. If the test location is close to the boundary of a grid, you'll need to search those grid(s) as well. Still, this is likely to be highly performant.
I haven't looked at it myself, but Postgres does have a module dedicated to the management of GIS data.
In an application I worked on in a previous life, we took all of the data, computed its key for a quad-tree (for 2D space) or an oct-tree (for 3D space), and stored that in the database. It was then a simple matter of loading the values from the database (to avoid recomputing the quad-tree) and following the standard quad-tree search algorithm.
This does of course mean you will touch all of the data at least once to get it into the data structure. But persisting this data structure means you get better lookup speeds from then on. I would imagine you will do a lot of nearest-neighbour checks for each data set.
(Wikipedia has a good explanation of k-d trees: http://en.wikipedia.org/wiki/Kd-tree)
You need a spatial index. Fortunately, MySQL provides just such an index, in its Spatial Extensions. They use an R-Tree index internally - though it shouldn't really matter what they use. The manual page referenced above has lots of details.
I guess you could do it theoretically if you had a large enough table to look the answers up in... Secondly, perhaps correct caching could get you a very good average case?
An R-Tree index can be used to speed spatial searches like this. Once created, it allows such searches to be better than O(n).