Dynamic space partitioning tree data structures?

I have an application where I need to do nearest neighbor, rectangle/polygon overlap, and other basic computational geometry operations against dynamically changing data (all 2D). I understand the basic data structures in the static case (quadtrees, 2-dimensional k-d trees, R-trees, BSP trees, etc.), but I want to understand the state of the art in the dynamic case. The difficulty seems to be knowing when/how to rebalance on insertion and deletion. For example, is there a dynamic data structure that answers k-nearest-neighbor queries against n points in O(log n + k), where insertion and deletion take O(log n) (amortized, maybe)? Is there a standard reference that summarizes what's known about this problem?

To be honest, I haven't done too much with dynamic trees myself (mostly static). But I believe the Bkd-tree paper (early 2000s?) is a good source to start with. I believe it has been referenced many times since then. You can use sources like ACM/CiteSeer to track newer papers that reference it. Side note: I think there is public code available for Bkd-trees, so you can play with it without investing too much time and see if it works for you.
Bkd-Tree: A Dynamic Scalable kd-Tree
Octavian Procopiuc, Pankaj K. Agarwal, Lars Arge, and Jeffrey Scott Vitter

You can try a space-filling curve (sometimes called a monster curve), such as the Z-order/Morton curve: a fast way to compute the key is to simply interleave the bits of the x and y coordinates. It also works for 3 dimensions.
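A minimal sketch of the bit interleaving for 16-bit coordinates (the bit width and the function name are arbitrary choices for illustration):

def interleave_bits(x: int, y: int) -> int:
    # Z-order (Morton) key: ... y2 x2 y1 x1 y0 x0, assuming
    # non-negative coordinates that fit in 16 bits.
    key = 0
    for i in range(16):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

# Points that are close in 2D tend to be close in Morton order, so sorting
# by the key lets a plain sorted array or B-tree act as a crude spatial index.
points = [(3, 5), (3, 6), (100, 7)]
points.sort(key=lambda p: interleave_bits(p[0], p[1]))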

Related

R-Tree construction algorithm using polygon MBR

I cannot seem to find any documentation on how to construct an R-Tree when I have all the known minimum bounding rectangles (MBRs) of the polygons in my set. The R-Tree would be ideal for storing these spatial references to eliminate my current brute-force inspection for polygon intersection:
for p1 in polygons:                # O(n)
    for p2 in polygons:            # O(n)
        if p2 is not p1:           # O(1)
            if p2.intersects(p1):  # O(1); computed using De Morgan's law on vertices
                # do stuff
Does anybody have a reference that describes how to determine the partitioning of the rectangles that encompass the MBRs of the polygons in the set?
There are many R-Tree partitioning algorithms; in my experience the best one is the R*-Tree (R-star tree) by Beckmann et al. Just search for "The R*-tree: an efficient and robust access method for points and rectangles".
If you prefer reading code, there are also many open-source implementations, including my own in Java. Be warned though, the R*-Tree algorithm is not trivial.
If you are looking for something simpler, try quadtrees or octrees. If insertion and update speed is a top priority, have a look at the PH-Tree (again, my own implementation), but it is also more complicated than quadtrees.
Another simpler option is the AABB-Tree; it's like an R-Tree but with only two bounding boxes per node. I believe it's used a lot in computer graphics, but I don't know much about it except that it is relatively simple as R-Tree variants go.
EDIT (Update to answer comment)
If you are looking for a bulk loading strategy such as STR, here is the original paper. You can have a look at my R-Tree implementation, as it also provides an implementation of an STR-Loader that can handle points and rectangles.
Searching Stack Overflow, I also found this answer, which apparently points to an alternative bulk loader specifically for storing rectangles.
I would like to point out that bulk loading (such as STR loading) is the fastest way to load an R-Tree. However, in my own experiments (see Figure 3 here), it is still 2-3 times slower than a good quadtree or a PH-Tree.
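To make the STR idea concrete, here is a rough sketch of one level of Sort-Tile-Recursive packing in Python (the node capacity and the leaf-only scope are simplifications for illustration, not taken from any of the implementations linked above):

import math

def str_pack_leaves(rects, node_capacity=16):
    """One level of Sort-Tile-Recursive (STR) packing.
    rects: list of (xmin, ymin, xmax, ymax) MBRs.
    Returns leaves, each a list of at most node_capacity rectangles;
    apply the same procedure to the leaves' own MBRs to build the
    upper levels of the R-Tree."""
    def center(r):
        return ((r[0] + r[2]) / 2.0, (r[1] + r[3]) / 2.0)

    n = len(rects)
    leaf_count = math.ceil(n / node_capacity)
    slice_count = math.ceil(math.sqrt(leaf_count))
    slice_size = slice_count * node_capacity

    # 1. Sort all rectangles by the x coordinate of their center.
    by_x = sorted(rects, key=lambda r: center(r)[0])

    leaves = []
    # 2. Cut into vertical slices, sort each slice by center y,
    #    and pack consecutive runs of node_capacity rectangles into leaves.
    for i in range(0, n, slice_size):
        vertical_slice = sorted(by_x[i:i + slice_size],
                                key=lambda r: center(r)[1])
        for j in range(0, len(vertical_slice), node_capacity):
            leaves.append(vertical_slice[j:j + node_capacity])
    return leaves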

kNN with dynamic insertions in high-dim space

I am looking for a method to do fast nearest neighbour search (hopefully O(log n)) for high dimensional points (typically ~11-13 dimensional). I would like it to behave optimally during insertions after the structure has been initialized. A k-d tree came to mind, but if you do dynamic insertions rather than bulk loading, the tree ceases to be balanced, and as far as I know rebalancing is an expensive operation.
So, I wanted to know what data structures you would prefer for this kind of setting. You have high dimensional points and you would like to do insertions and query for the nearest neighbour.
Another data structure that comes to mind is the cover tree. Unlike KD trees which were originally developed to answer range queries, this data structure is optimal for nearest neighbor queries. It has been used in n-body problems that involve computing the k nearest neighbors of all the data points. Such problems also occur in density estimation schemes (Parzen windows).
I don't know enough about your specific problem, but I do know that there are online versions of this data structure. Check out Alexander Gray's page and this link.
The Curse of Dimensionality gets in the way here. You might consider applying Principal Component Analysis (PCA) to reduce the dimensionality, but as far as I know, nobody has a great answer for this.
I have dealt with this type of problem before (in audio and video fingerprinting), sometimes with up to 30 dimensions. Analysis usually revealed that some of the dimensions did not contain relevant information for searches (actually fuzzy searches, my main goal), so I omitted them from the index structures used to access the data, but included them in the logic to determine matches from a list of candidates found during the search. This effectively reduced the dimensionality to a tractable level.
I simplified things further by quantizing the remaining dimensions severely, such that the entire multidimensional space was mapped into a 32-bit integer. I used this as the key in an STL map (a red-black tree), though I could have used a hash table. I was able to add millions of records dynamically to such a structure (RAM-based, of course) in about a minute or two, and searches took about a millisecond on average, though the data was by no means evenly distributed. Searches required careful enumeration of values in the dimensions that were mapped into the 32-bit key, but were reliable enough to use in a commercial product. I believe it is used to this day in iTunes Match, if my sources are correct. :)
The bottom line is that I recommend you take a look at your data and do something custom that exploits features in it to make for fast indexing and searching. Find the dimensions that vary the most and are the most independent of each other. Quantize those and use them as the key in an index. Each bucket in the index contains all items that share that key (there will likely be more than one). To find nearest neighbors, look at "nearby" keys and within each bucket, look for nearby values. Good luck.
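A rough sketch of that quantise-and-bucket idea in Python (the chosen dimensions, the bit width, and the assumption that values are normalised to [0, 1] are all made up for illustration, not the parameters from my system):

from itertools import product

DIMS = (0, 3, 5, 7)        # hypothetical: indices of the most informative dimensions
BITS = 8                   # hypothetical: quantisation bits per chosen dimension
SCALE = (1 << BITS) - 1

def quantize(v):
    return min(SCALE, max(0, int(v * SCALE)))   # assumes v in [0, 1]

def make_key(vec):
    # Pack the quantised values of the chosen dimensions into one integer.
    key = 0
    for d in DIMS:
        key = (key << BITS) | quantize(vec[d])
    return key

index = {}   # key -> list of (vector, payload) records sharing that key

def insert(vec, payload):
    index.setdefault(make_key(vec), []).append((vec, payload))

def candidates(vec, radius=1):
    # Enumerate buckets whose quantised coordinates differ by at most
    # `radius` in each chosen dimension; the caller then ranks these
    # candidates using the full-dimensional distance.
    quants = [quantize(vec[d]) for d in DIMS]
    seen, out = set(), []
    for offsets in product(range(-radius, radius + 1), repeat=len(DIMS)):
        key = 0
        for q, off in zip(quants, offsets):
            key = (key << BITS) | min(SCALE, max(0, q + off))
        if key not in seen:
            seen.add(key)
            out.extend(index.get(key, []))
    return out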
p.s. I wrote a paper on my technique, available here. Sorry about the paywall. Perhaps you can find a free copy elsewhere. Let me know if you have any questions about it.
If you use a bucket k-d tree with a reasonably large bucket size, the tree gets a better idea of where to split when the leaves get too full. The folks working on Robocode do this under extremely harsh time constraints, with random insertions happening on the fly and kNN queries with k > 80, d > 10 and n > 30k answered in under 1 ms. Check out this kD-Tree Tutorial, which explains a bunch of kD-Tree enhancements and how to implement them.
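A stripped-down sketch of a bucket k-d tree insert (the bucket size and the widest-spread split rule are just reasonable defaults, not necessarily what the Robocode trees or the linked tutorial use):

BUCKET_SIZE = 32   # assumed capacity; larger buckets delay splitting

class BucketKDNode:
    """Leaf-bucketed k-d tree: points accumulate in leaves, and a leaf is
    only split (at the median of its widest dimension) once it overflows,
    so the split is chosen with real data in hand."""
    def __init__(self):
        self.points = []            # non-empty list only while a leaf
        self.axis = self.split = self.left = self.right = None

    def insert(self, p):
        if self.points is not None:                 # still a leaf
            self.points.append(p)
            if len(self.points) > BUCKET_SIZE:
                self._split()
            return
        child = self.left if p[self.axis] <= self.split else self.right
        child.insert(p)

    def _split(self):
        pts, dims = self.points, len(self.points[0])
        # Split along the dimension with the largest spread in this bucket.
        self.axis = max(range(dims),
                        key=lambda d: max(p[d] for p in pts) - min(p[d] for p in pts))
        pts.sort(key=lambda p: p[self.axis])
        mid = len(pts) // 2
        self.split = pts[mid][self.axis]
        self.left, self.right = BucketKDNode(), BucketKDNode()
        self.left.points, self.right.points = pts[:mid], pts[mid:]
        self.points = None                          # now an internal node
        # (heavily duplicated coordinates would need extra handling here)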
In my experience, 11-13 dimensions is not too bad -- if you bulk-load. Both bulk-loaded R-trees (in contrast to k-d-trees these remain balanced!) and k-d-trees should still work much better than linear scanning.
Once you go fully dynamic, my experiences are much worse. Roughly: with bulk-loaded trees I'm seeing 20x speedups, with incrementally built R-trees just 7x. So it does pay off to frequently rebuild the tree. And depending on how you organize your data, it may be much faster than you think. The bulk load for the k-d tree that I'm using is O(n log n), and I read that there is an O(n log log n) variant, too. With a low constant factor. For the R-tree, Sort-Tile-Recursive is the best bulk load I have seen so far, also O(n log n) with a low constant factor.
So yes, in high dimensionality I would consider just rebuilding the tree from time to time.
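For what it's worth, the bulk load I have in mind is, in spirit, just a recursive median split; a minimal sketch (not the code of the library I actually use):

def build_kdtree(points, depth=0):
    """Median-split bulk load for a k-d tree. Sorting at every level makes
    this O(n log^2 n); replacing the sort with a linear-time median
    selection gives the O(n log n) bound mentioned above."""
    if not points:
        return None
    axis = depth % len(points[0])
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {
        "point": pts[mid],
        "axis": axis,
        "left": build_kdtree(pts[:mid], depth + 1),
        "right": build_kdtree(pts[mid + 1:], depth + 1),
    }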

How to find the closest 2 points in a 100 dimensional space with 500,000 points?

I have a database with 500,000 points in a 100 dimensional space, and I want to find the closest 2 points. How do I do it?
Update: The space is Euclidean, sorry. And thanks for all the answers. BTW, this is not homework.
There's a chapter in Introduction to Algorithms devoted to finding the two closest points in two-dimensional space in O(n log n) time. You can check it out on Google Books. In fact, I suggest it for everyone, as the way they apply the divide-and-conquer technique to this problem is very simple, elegant and impressive.
Although it can't be extended directly to your problem (the constant 7 would be replaced with something like 2^101 - 1), in practice it should be just fine for most datasets. So, if you have reasonably random input, it will give you O(m · n log n) complexity, where n is the number of points and m is the number of dimensions.
edit
That's all assuming you have a Euclidean space, i.e., the length of a vector v is sqrt(v0^2 + v1^2 + v2^2 + ...). If you can choose the metric, however, there could be other options to optimize the algorithm.
Use a kd tree. You're looking at a nearest neighbor problem and there are highly optimized data structures for handling this exact class of problems.
http://en.wikipedia.org/wiki/Kd-tree
P.S. Fun problem!
You could try the ANN library, but that only gives reliable results up to 20 dimensions.
Run PCA on your data to reduce the vectors from 100 dimensions to, say, 20 dimensions. Then build a k-d tree and get the 2 closest neighbors based on Euclidean distance.
Generally, if the number of dimensions is very large, you have to use either a brute-force approach (parallel + distributed/MapReduce) or a clustering-based approach.
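To illustrate the PCA-then-k-d-tree route, here is a sketch with placeholder random data; note that distances in the reduced space only approximate the original 100-dimensional distances, so the result is a candidate pair rather than a guaranteed exact answer:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KDTree

rng = np.random.default_rng(0)
X = rng.random((50_000, 100))                  # placeholder for the real 500,000 x 100 data

X20 = PCA(n_components=20).fit_transform(X)    # 100 -> 20 dimensions
tree = KDTree(X20)

# For each point, query its 2 nearest neighbours; the nearest is the point
# itself at distance 0, so column 1 holds the nearest *other* point.
dist, idx = tree.query(X20, k=2)
i = int(np.argmin(dist[:, 1]))
closest_pair = (i, int(idx[i, 1]))             # candidate closest pair in PCA space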
Use the data structure known as a k-d tree. You'll need to allocate a lot of memory, but you may discover an optimization or two along the way based on your data.
http://en.wikipedia.org/wiki/Kd-tree
My friend was working on his PhD thesis years ago when he encountered a similar problem. His work was on the order of 1M points across 10 dimensions. We built a k-d tree library to solve it. We may be able to dig up the code if you want to contact us offline.
Here's his published paper:
http://www.elec.qmul.ac.uk/people/josh/documents/ReissSelbieSandler-WIAMIS2003.pdf

Performance of an A* search implemented in Clojure

I have implemented an A* search algorithm for finding a shortest path between two states.
The algorithm uses one hash map for storing the best known distances for visited states, and another hash map for storing the child-parent relationships needed for reconstructing the shortest path.
Here is the code. Implementation of the algorithm is generic (states only need to be "hashable" and "comparable") but in this particular case states are pairs (vectors) of ints [x y] and they represent one cell in a given heightmap (cost for jumping to neighboring cell depends on the difference in heights).
The question is whether it's possible to improve performance, and how. Maybe by using some features from Clojure 1.2 or future versions, by changing the logic of the algorithm implementation (e.g. using a different way to store the path), or by changing the state representation in this particular case?
Java implementation runs in an instant for this map and Clojure implementation takes about 40 seconds. Of course, there are some natural and obvious reasons for this: dynamic typing, persistent data structures, unnecessary (un)boxing of primitive types...
Using transients didn't make much difference.
Using priority-map instead of sorted-set
I first used a sorted-set for storing the open nodes (the search frontier); switching to a priority-map improved performance: now it takes 15-20 seconds for this map (before, it took 40 seconds).
This blog post was very helpful. And "my" new implementation is pretty much the same.
New a*-search can be found here.
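For readers who don't know Clojure, the overall structure (a priority queue for the open set, one map for best known costs, one map for parents) looks roughly like this in Python; this is a generic sketch, not a translation of the linked code:

import heapq

def a_star(start, goal, neighbors, cost, heuristic):
    """Generic A*: neighbors(s) yields successor states, cost(a, b) is the
    edge cost, heuristic(s) estimates the remaining cost to goal."""
    open_heap = [(heuristic(start), start)]   # entries are (f = g + h, state)
    g = {start: 0}                            # best known cost from start
    parent = {start: None}                    # child -> parent, for path reconstruction

    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:
            path = []
            while current is not None:        # walk the parent map back to start
                path.append(current)
                current = parent[current]
            return list(reversed(path))
        for nxt in neighbors(current):
            new_g = g[current] + cost(current, nxt)
            if new_g < g.get(nxt, float("inf")):
                g[nxt] = new_g
                parent[nxt] = current
                heapq.heappush(open_heap, (new_g + heuristic(nxt), nxt))
    return None                               # goal unreachable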
I don't know Clojure, but I can give you some general advice about improving the performance of Vanilla A*.
Consider implementing IDA*, which is a variant of A* that uses less memory, if it's suitable for your domain.
Try a different heuristic. A good heuristic can have a significant impact on the number of node expansions required.
Use a cache, often called a "transposition table" in search algorithms. Since search graphs are usually directed acyclic graphs and not true trees, you can end up searching the same state more than once; a cache that remembers previously searched nodes reduces node expansions.
Dr. Jonathan Schaeffer has some slides on this subject:
http://webdocs.cs.ualberta.ca/~jonathan/Courses/657/Notes/10.Single-agentSearch.pdf
http://webdocs.cs.ualberta.ca/~jonathan/Courses/657/Notes/11.Evaluations.pdf

Determining the best k for a k nearest neighbour

I need to do some cluster analysis on a set of 2-dimensional data (I may add extra dimensions along the way).
The analysis itself will form part of the data being fed into a visualisation, rather than the inputs into another process (e.g. Radial Basis Function Networks).
To this end, I'd like to find a set of clusters which primarily "looks right", rather than elucidating some hidden patterns.
My intuition is that k-means would be a good starting place for this, but that finding the right number of clusters to run the algorithm with would be problematic.
The problem I'm coming to is this:
How to determine the 'best' value for k such that the clusters formed are stable and visually verifiable?
Questions:
Assuming that this isn't NP-complete, what is the time complexity for finding a good k (probably reported as the number of times the k-means algorithm has to be run)?
Is k-means a good starting point for this type of problem? If not, what other approaches would you recommend? A specific example, backed by an anecdote/experience, would be ideal.
What shortcuts/approximations would you recommend to increase performance?
For problems with an unknown number of clusters, agglomerative hierarchical clustering is often a better route than k-means.
Agglomerative clustering produces a tree structure, where the closer you are to the trunk, the fewer the number of clusters, so it's easy to scan through all numbers of clusters. The algorithm starts by assigning each point to its own cluster, and then repeatedly groups the two closest centroids. Keeping track of the grouping sequence allows an instant snapshot for any number of possible clusters. Therefore, it's often preferable to use this technique over k-means when you don't know how many groups you'll want.
There are other hierarchical clustering methods (see the paper suggested in Imran's comments). The primary advantage of an agglomerative approach is that there are many implementations out there, ready-made for your use.
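As an illustration, here is a small SciPy-based sketch (using Ward linkage and random placeholder data; the method argument also offers the centroid-style linkage described above):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.random((200, 2))                 # placeholder 2-D data

# Build the full merge tree once; each row of Z records one merge.
Z = linkage(X, method="ward")

# Cutting the same tree at different levels yields every candidate k
# without re-running the clustering.
labels_by_k = {k: fcluster(Z, t=k, criterion="maxclust") for k in range(2, 10)}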
In order to use k-means, you need to know how many clusters there are. You can't rely on naive meta-optimisation, since the more clusters you add (up to one cluster per data point), the further you drift towards over-fitting. You can look at cluster validation methods and optimise the k hyperparameter with them, but in my experience it rarely works well, and it's very costly too.
If I were you, I would do a PCA, possibly on a polynomial feature space (mind your available time), depending on what you know about your input, and cluster along the most representative components.
More info about your data set would be very helpful for a more precise answer.
Here's my approximate solution:
1. Start with k=2.
2. For a number of tries:
   - Run the k-means algorithm to find k clusters.
   - Find the mean square distance from the origin to the cluster centroids.
3. Compute the standard deviation of these distances across the tries; this is a proxy for the stability of the clusters.
4. If the stability of the clusters for k is less than the stability of the clusters for k - 1, return k - 1.
5. Otherwise, increment k by 1 and repeat from step 2.
The thesis behind this algorithm is that the number of sets of k clusters is small for "good" values of k.
If we can find a local optimum for this stability, or an optimal delta for the stability, then we can find a good set of clusters which cannot be improved by adding more clusters.
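A rough sketch of this heuristic using scikit-learn's KMeans (here the standard deviation of the centroid-to-origin mean square distance across restarts serves as the instability proxy; the parameter values are purely illustrative):

import numpy as np
from sklearn.cluster import KMeans

def pick_k(X, k_max=10, tries=10, seed=0):
    """Return the k whose clusterings look most stable across restarts."""
    rng = np.random.default_rng(seed)
    prev_instability = None
    for k in range(2, k_max + 1):
        scores = []
        for _ in range(tries):
            km = KMeans(n_clusters=k, n_init=1,
                        random_state=int(rng.integers(1 << 31))).fit(X)
            # mean square distance from the origin to the cluster centroids
            scores.append(float(np.mean(np.sum(km.cluster_centers_ ** 2, axis=1))))
        instability = float(np.std(scores))
        if prev_instability is not None and instability > prev_instability:
            return k - 1            # the previous k was more stable
        prev_instability = instability
    return k_max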
In a previous answer, I explained how Self-Organizing Maps (SOM) can be used in visual clustering.
Otherwise, there exists a variation of the k-means algorithm called X-Means, which finds the number of clusters by optimizing the Bayesian Information Criterion (BIC) and addresses scalability by using k-d trees.
Weka includes an implementation of X-Means along with many other clustering algorithms, all in an easy-to-use GUI tool.
Finally, you might refer to this page, which discusses the Elbow Method among other techniques for determining the number of clusters in a dataset.
You might look at papers on cluster validation. Here's one that is cited in papers that involve microarray analysis, which involves clustering genes with related expression levels.
One such technique is the Silhouette measure that evaluates how closely a labeled point is to its centroid. The general idea is that, if a point is assigned to one centroid but is still close to others, perhaps it was assigned to the wrong centroid. By counting these events across training sets and looking across various k-means clusterings, one looks for the k such that the labeled points overall fall into the "best" or minimally ambiguous arrangement.
It should be said that clustering is more of a data visualization and exploration technique. It can be difficult to elucidate with certainty that one clustering explains the data correctly, above all others. It's best to merge your clusterings with other relevant information. Is there something functional or otherwise informative about your data, such that you know some clusterings are impossible? This can reduce your solution space considerably.
From your Wikipedia link, regarding computational complexity, the k-means clustering problem is:
- NP-hard in general Euclidean space (d dimensions), even for 2 clusters;
- NP-hard for a general number of clusters k, even in the plane;
- exactly solvable in time O(n^(dk+1) log n), where n is the number of entities to be clustered, if k and d are fixed.
Thus, a variety of heuristic algorithms are generally used.
That said, finding a good value of k is usually a heuristic process (i.e. you try a few and select the best).
I think k-means is a good starting point; it is simple and easy to implement (or copy). Only look further if you have serious performance problems.
If the set of points you want to cluster is exceptionally large, a first-order optimisation would be to randomly select a small subset and use that subset to find your k-means centroids.
Choosing the best K can be seen as a model selection problem. One possible approach is Minimum Description Length, which in this context means: you could store a table with all the points (in which case K=N); at the other extreme, you have K=1, and all the points are stored as their distances from a single centroid. This section from Introduction to Information Retrieval by Manning and Schutze suggests minimising the Akaike Information Criterion as a heuristic for an optimal K.
This problem belongs to the "internal evaluation" class of clustering optimisation problems, for which the current state-of-the-art solution seems to be the Silhouette coefficient, as stated here:
https://en.wikipedia.org/wiki/Cluster_analysis#Applications
and here:
https://en.wikipedia.org/wiki/Silhouette_(clustering) :
"silhouette plots and averages may be used to determine the natural number of clusters within a dataset"
scikit-learn provides a sample implementation of the methodology here:
http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html
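A minimal sketch of that silhouette-based selection with scikit-learn (random placeholder data; in practice you would also plot the per-sample silhouettes as in the linked example rather than only compare averages):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.random((300, 2))                     # placeholder 2-D data

scores = {}
for k in range(2, 10):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)  # in [-1, 1], higher is better

best_k = max(scores, key=scores.get)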
