What are some fast approximations of Nearest Neighbor? - algorithm

Say I have a huge list (a few million) of n vectors; given a new vector, I need to find a pretty close one from the set, but it doesn't need to be the closest. (Exact nearest neighbor finds the closest and runs in O(n) time.)
What algorithms are there that can approximate nearest neighbor very quickly at the cost of accuracy?
EDIT: Since it will probably help, I should mention the data are pretty smooth most of the time, with a small chance of spikiness in a random dimension.

There exist algorithms faster than O(n) for finding the closest element under an arbitrary distance metric. Check http://en.wikipedia.org/wiki/Kd-tree for details.

If you are using high-dimensional vectors, like SIFT or SURF or any descriptor used in the multimedia sector, I suggest you consider LSH.
A PhD dissertation from Wei Dong (http://www.cs.princeton.edu/cass/papers/cikm08.pdf) might help you find an up-to-date algorithm for KNN search, i.e., LSH. Unlike more traditional LSH schemes, such as E2LSH (http://www.mit.edu/~andoni/LSH/) published earlier by MIT researchers, his algorithm uses multi-probing to better balance the trade-off between recall rate and cost.

A web search on "nearest neighbor" lsh library finds
http://www.mit.edu/~andoni/LSH/
http://www.cs.umd.edu/~mount/ANN/
http://msl.cs.uiuc.edu/~yershova/MPNN/MPNN.htm

For approximate nearest neighbour, the fastest way is to use locality-sensitive hashing (LSH). There are many variants of LSH; you should choose one depending on the distance metric of your data. The big-O of the query time for LSH is independent of the dataset size (not counting the time to output the result), so it is really fast. This LSH library implements various LSH schemes for L2 (Euclidean) space.
Now, if the dimension of your data is less than 10, kd tree is preferred if you want exact result.
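To make the bucketing idea behind LSH concrete, here is a minimal sketch of the random-hyperplane LSH family (best suited to cosine/angular similarity; L2 variants instead quantize random projections). All names and parameter choices below are my own, not from any of the libraries linked above:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
n, d, n_bits = 10000, 64, 16

data = rng.standard_normal((n, d))
planes = rng.standard_normal((n_bits, d))   # random hyperplanes

def lsh_key(v):
    # sign pattern of the projections, packed into one integer bucket key
    return int("".join("1" if x > 0 else "0" for x in planes @ v), 2)

# index: bucket key -> ids of the points hashed there
buckets = defaultdict(list)
for i, v in enumerate(data):
    buckets[lsh_key(v)].append(i)

def query(q):
    # look only inside the query's bucket, then rank those candidates exactly
    cand = buckets.get(lsh_key(q), [])
    if not cand:
        return None          # real implementations probe neighbouring buckets too
    c = np.asarray(cand)
    return int(c[np.argmin(np.linalg.norm(data[c] - q, axis=1))])
```

Nearby points tend to fall on the same side of most hyperplanes, so they share a bucket key; the query then scans only that bucket instead of all n points. Production libraries use several hash tables and multi-probing to raise the recall rate.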

How to optimize the algorithm used to calculate K-nearest neighbor algorithm?

KNN is such a straightforward algorithm that it's easy to implement:
# for each test datapoint in X_test:
#     calculate its distance from every point in X_train
#     find the k closest points
#     take a majority vote of the k neighbors and use that as the prediction for this test data point
Yet I think the time complexity is not good enough. How is the algorithm optimized when it is implemented in practice? (What tricks or data structures are used?)
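Written out as runnable code, the pseudocode above might look like this (a NumPy sketch; the function and variable names are my own):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, X_test, k=3):
    preds = []
    for x in X_test:
        # distance from this test point to every training point
        dists = np.linalg.norm(X_train - x, axis=1)
        # indices of the k closest training points
        nearest = np.argsort(dists)[:k]
        # majority vote among the k neighbours
        preds.append(Counter(y_train[nearest]).most_common(1)[0][0])
    return np.array(preds)
```

This is the O(size(X_test) * size(X_train) * d) brute-force version that the answers below set out to improve.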
The k-nearest neighbor algorithm differs from other learning methods because no
model is induced from the training examples. The data remains as they are; they
are simply stored in memory.
A genetic algorithm can be combined with k-NN to improve performance. Another successful technique, known as instance selection, addresses the storage cost and the noise sensitivity of k-NN simultaneously. You can try this: when a new instance is to be classified, instead of involving all training instances to retrieve the k neighbors (which increases computing time), first select a smaller subset of instances.
You can also try:
Improving k-NN speed by reducing the number of training documents
Improving k-NN by neighborhood size and similarity function
Improving k-NN by advanced storage structures
What you describe is the brute-force kNN calculation, with complexity O(size(X_test) * size(X_train) * d), where d is the number of dimensions in the feature vectors.
More efficient solutions use spatial indexing to put an index on the X_train data. This typically reduces individual lookups to O(log(size(X_train)) * d) or even O(log(size(X_train)) + d).
Common spatial indexes are:
kD-Trees (often used, but they scale badly with 'd')
R-Trees, such as the RStarTree
Quadtrees (usually not efficient for large 'd', but the PH-Tree, for example, works well with d=1000 and has excellent remove/insertion times; disclaimer: this is my own work)
BallTrees (I don't really know much about them)
CoverTrees (very fast lookup for high 'd', but long build-up times)
There is also the class of 'approximate' NN searches/queries. These trade correctness for speed; they may skip a few of the closest neighbors. You can find a performance comparison and numerous Python implementations here.
If you are looking for Java implementations of some of the spatial indexes above, have a look at my implementations.
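As a sketch of what a spatial index buys you, here is a kd-tree lookup using SciPy's cKDTree (assuming SciPy is available; the dataset is synthetic and the sizes are arbitrary):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
X_train = rng.random((100000, 3))      # low 'd', where kd-trees work well
tree = cKDTree(X_train)                # build the index once, O(n log n)

X_test = rng.random((5, 3))
# each lookup is now roughly O(log n) on average instead of a full O(n) scan
dists, idx = tree.query(X_test, k=5)
```

`idx[i]` holds the indices of the 5 nearest training points for test point i, and `dists[i]` the corresponding distances, already sorted ascending.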

Given n points in a 2-D plane we have to find k nearest neighbours of each point among themselves

I explored the method using a min-heap. For each point we can store a min-heap of size k, but it takes too much space for large n (I'm targeting n around 100 million). Surely there must be a better way of doing this that uses less space without affecting time complexity too much. Is there some other data structure?
This problem is a typical setup for a KD-tree. Such a solution would have linearithmic complexity but may be relatively complex to implement (if a ready implementation is not available).
An alternative approach could be using bucketing to reduce the complexity of the naive algorithm. The idea is to separate the plane into "buckets", i.e. squares of some size, and place each point in the bucket it belongs to. The closest points will come from the closest buckets. For random data this can be quite a good improvement, but the worst case is still the same as the naive approach.
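The bucketing idea can be sketched as follows (cell size and all names are my own choices; for simplicity this version only checks the query's bucket and its 8 neighbours, which matches exact k-NN only when the cell size is large relative to the neighbour distances):

```python
import numpy as np
from collections import defaultdict

def build_grid(points, cell):
    # bucket coordinates -> ids of the points inside that square
    grid = defaultdict(list)
    for i, p in enumerate(points):
        grid[(int(p[0] // cell), int(p[1] // cell))].append(i)
    return grid

def knn_grid(points, grid, cell, q, k):
    cx, cy = int(q[0] // cell), int(q[1] // cell)
    # gather candidates from the query's bucket and its 8 neighbours
    cand = [i for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            for i in grid.get((cx + dx, cy + dy), [])]
    if len(cand) < k:                  # sparse region: fall back to brute force
        cand = list(range(len(points)))
    cand = np.asarray(cand)
    d = np.linalg.norm(points[cand] - q, axis=1)
    return cand[np.argsort(d)[:k]]
```

A production version would expand the search ring outward until the k-th candidate is provably closer than any unvisited bucket; that keeps the worst case correct while staying fast on well-distributed data.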

approximate nearest neighbor (A1NN) for high dimension spaces

I read this question about finding the closest neighbor for 3-dimensions points. Octree is a solution for this case.
kd-Tree is a solution for small spaces (generally less than 50 dimensions).
For higher dimensions (vectors of hundreds of dimensions and millions of points), LSH is a popular solution to the AKNN (Approximate K-NN) problem, as pointed out in this question.
However, LSH is popular for K-NN solutions where K >> 1. For example, LSH has been successfully used for Content-Based Image Retrieval (CBIR) applications, where each image is represented through a vector of hundreds of dimensions and the dataset contains millions (or billions) of images. In this case, K is the number of top-K most similar images w.r.t. the query image.
But what if we are interested only in the single approximate nearest neighbor (i.e. A1-NN) in high-dimensional spaces? Is LSH still the winner, or have ad-hoc solutions been proposed?
You might look at http://papers.nips.cc/paper/2666-an-investigation-of-practical-approximate-nearest-neighbor-algorithms.pdf and http://research.microsoft.com/en-us/um/people/jingdw/pubs%5CTPAMI-TPTree.pdf. Both have figures and graphs showing the performance of LSH vs. the performance of tree-based methods which also produce only approximate answers, for different values of k, including k=1. The Microsoft paper claims that "It has been shown in [34] that randomized KD trees can outperform the LSH algorithm by about an order of magnitude". Table 2, p. 7 of the other paper appears to show speedups over LSH which are reasonably consistent for different values of k.
Note that this is not LSH vs. kd-trees. This is LSH vs. various cleverly tuned approximate search tree structures, where you typically search only the most promising parts of the tree rather than all of the parts that could possibly contain the closest point, and you search a number of different trees to get a decent probability of finding good points to compensate for this, tuning various parameters to get the fastest possible performance.

Is the range tree widely used in spatial search problems?

I am looking for some data structures for range searching. I think range trees offer a good time complexity (but with some storage requirements).
However, it seems to me that other data structures, like KD-trees, are more discussed and recommended than range trees. Is this true? If so, why?
I would expect that it is because kd-trees can straightforwardly be extended to contain objects other than points. This gives them many applications in e.g. virtual worlds, where we want quick querying of triangles. Similar extensions of range trees are not straightforward, and in fact I've never seen any.
To give a quick recap: a kd-tree can preprocess a set of n points in d-space in O(n log n) time into a structure using O(n) space, such that any d-dimensional range query can be answered in O(n^(1-1/d) + k) time, where k is the number of answers. A range tree takes O(n log^(d-1) n) time to preprocess, takes O(n log^(d-1) n) space, and can answer range queries in O(log^(d-1) n + k) time.
The query time for a range tree is obviously a lot better than that of a kd-tree, if we're talking about 2- or 3-dimensional space. However, the kd-tree has several advantages. First of all, it always requires only linear storage. Secondly, it is always constructed in O(n log n) time. Third, if the dimensionality is very high, it will outperform a range tree unless your point sets are very large (although arguably, at this point a linear search will be almost as fast as a kd-tree).
I think another important point is that kd-trees are more well known by people than range trees. I'd never heard of a range tree before taking a course in computational geometry, but I'd heard of and worked with kd-trees before (albeit in a computer graphics setting).
EDIT: You ask what is a better data structure for 2D or 3D fixed radius search when you have millions of points. I really can't tell you! I'd be inclined to say a range tree will be faster if you perform many queries, but for 3D the construction will be slower by a factor of O(log n), and memory use may become an issue before speed does. I'd recommend integrating good implementations of both structures and simply testing what does a better job for your particular requirements.
For your needs (particle simulation in 2D or 3D), you want a data structure capable of all nearest neighbor queries. The cover tree is a data structure that is best suited for this task. I came across it while computing the nearest neighbors for kernel density estimation. This Wikipedia page explains the basic definition of the tree, and John Langford's page has a link to a C++ implementation.
The running time of a single query is O(c^12 log n), where c is the expansion constant of the dataset. This is an upper bound; in practice, the data structure performs faster than others. This paper shows that the running time of batch processing of all nearest neighbors (for all the data points), as needed for a particle simulation, is O(c^16 n), and this theoretically linear bound is also practical for your needs. Construction time is O(n log n) and storage is O(n).

How to efficiently find k-nearest neighbours in high-dimensional data?

So I have about 16,000 75-dimensional data points, and for each point I want to find its k nearest neighbours (using Euclidean distance; currently k=2, if that makes it easier).
My first thought was to use a kd-tree for this, but as it turns out they become rather inefficient as the number of dimensions grows. In my sample implementation, it's only slightly faster than exhaustive search.
My next idea would be using PCA (Principal Component Analysis) to reduce the number of dimensions, but I was wondering: Is there some clever algorithm or data structure to solve this exactly in reasonable time?
The Wikipedia article for kd-trees has a link to the ANN library:
ANN is a library written in C++, which supports data structures and algorithms for both exact and approximate nearest neighbor searching in arbitrarily high dimensions. Based on our own experience, ANN performs quite efficiently for point sets ranging in size from thousands to hundreds of thousands, and in dimensions as high as 20. (For applications in significantly higher dimensions, the results are rather spotty, but you might try it anyway.)
As far as algorithms/data structures are concerned:
The library implements a number of different data structures, based on kd-trees and box-decomposition trees, and employs a couple of different search strategies.
I'd try it first directly, and if that doesn't produce satisfactory results I'd use it with the data set after applying PCA/ICA (since it's quite unlikely you're going to end up with few enough dimensions for a kd-tree to handle).
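The reduce-then-index approach could be sketched like this, assuming scikit-learn and SciPy are available (the choice of 15 components is an arbitrary illustration, not a recommendation):

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
X = rng.standard_normal((16000, 75))    # roughly the asker's setup

# project onto the top principal components before building the index
Z = PCA(n_components=15).fit_transform(X)

tree = cKDTree(Z)
# k=3 because each point's nearest neighbour in the tree is itself
dists, idx = tree.query(Z[:5], k=3)
neighbours = idx[:, 1:]                 # drop the self-match
```

Neighbours found in the reduced space are only approximate with respect to the original 75-dimensional distances, so one common refinement is to re-rank a shortlist of candidates using the full-dimensional distance.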
use a kd-tree
Unfortunately, in high dimensions this data structure suffers severely from the curse of dimensionality, which causes its search time to be comparable to the brute force search.
reduce the number of dimensions
Dimensionality reduction is a good approach, which offers a fair trade-off between accuracy and speed. You lose some information when you reduce your dimensions, but gain some speed.
By accuracy I mean finding the exact Nearest Neighbor (NN).
Principal Component Analysis (PCA) is a good idea when you want to reduce the dimensionality of the space your data live in.
Is there some clever algorithm or data structure to solve this exactly in reasonable time?
Approximate nearest neighbor search (ANNS), where you are satisfied with finding a point that might not be the exact nearest neighbor, but rather a good approximation of it (for example, the 4th NN to your query, when you are looking for the 1st NN).
That approach costs you accuracy, but increases performance significantly. Moreover, the probability of finding a good NN (close enough to the query) is relatively high.
You could read more about ANNS in the introduction of our kd-GeRaF paper.
A good idea is to combine ANNS with dimensionality reduction.
Locality-Sensitive Hashing (LSH) is a modern approach to the nearest neighbor problem in high dimensions. The key idea is that points that lie close to each other are hashed to the same bucket. So when a query arrives, it is hashed to a bucket, and that bucket (and usually its neighboring ones) contains good NN candidates.
FALCONN is a good C++ implementation, which focuses on cosine similarity. Another good implementation is our DOLPHINN, which is a more general library.
You could conceivably use Morton Codes, but with 75 dimensions they're going to be huge. And if all you have is 16,000 data points, exhaustive search shouldn't take too long.
There is no reason to believe this is NP-complete. You're not really optimizing anything, and I'd have a hard time figuring out how to convert this to another NP-complete problem (I have Garey and Johnson on my shelf and can't find anything similar). Really, I'd just pursue more efficient methods of searching and sorting. If you have n observations, you have to calculate n x n distances right up front. Then for every observation, you need to pick out the top k nearest neighbors. That's n squared for the distance calculation and n log(n) for the sort, but you have to do the sort n times (a different sort for EVERY observation). Messy, but still polynomial time to get your answers.
A BK-tree isn't such a bad thought. Take a look at Nick's blog on Levenshtein automata. While his focus is strings, it should give you a springboard for other approaches. The other thing I can think of is R-trees; however, I don't know if they've been generalized to large dimensions. I can't say more than that, since I have neither used them directly nor implemented them myself.
One very common implementation would be to sort the nearest-neighbours array that you have computed for each data point.
As sorting the entire array can be very expensive, you can use methods like indirect partial sorting, e.g. numpy.argpartition in the NumPy library, to select only the closest K values you are interested in. There is no need to sort the entire array.
@Grembo's answer above should be reduced significantly, as you only need the K nearest values, and there is no need to sort all the distances from each point.
If you just need K neighbours, this method works very well, reducing your computational cost and time complexity.
If you need sorted K neighbours, sort the output again.
see
Documentation for argpartition
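A minimal example of the np.argpartition approach described above (all names and sizes are my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 10))
q = rng.random(10)
k = 5

d = np.linalg.norm(X - q, axis=1)
# O(n) selection: the k smallest distances end up first, in arbitrary order
nearest = np.argpartition(d, k)[:k]
# if ordered neighbours are needed, sort only those k values
nearest_sorted = nearest[np.argsort(d[nearest])]
```

This replaces the O(n log n) full sort per query with an O(n) partition plus an O(k log k) sort of just the selected candidates.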