There is a well-known algorithm, Surface Simplification Using Quadric Error Metrics, for decimating triangular meshes; it is valued both for the high quality of its results and for its performance.
Unfortunately, by its nature the algorithm is sequential, since its main step is
Iteratively remove the pair (v1, v2) of least cost from the heap, contract this pair, and update the costs of all valid pairs involving v1.
and it cannot be efficiently parallelized, since the next pair to be removed from the heap depends, in general, on the processing of the current pair.
Is there a way to utilize all the threads of a modern multi-core processor to get a result similar in quality to this algorithm, but much faster for huge meshes containing millions of triangles?
I'm trying to solve this exercise for this algorithm. I've tried researching multithreading, but I couldn't come up with a solution.
Cache-oblivious traversal is not about complexity; it is about efficient use of the CPU cache.
Performance when traversing matrices depends heavily on the CPU cache. There can be orders of magnitude of difference between two algorithms with identical complexity but different cache access patterns.
It is a technique that can be used both in a single-threaded and a multi-threaded implementation.
Its basic idea is that you do not traverse the matrix line by line but quadrant by quadrant, allowing the CPU to keep the data it brings in from memory in its cache. Experiment with the size of your quadrants and you will see a huge improvement.
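As a minimal sketch of that idea in Python with NumPy, here is a blocked (quadrant-by-quadrant) matrix transpose; the block size of 64 is just a starting point for the experimentation suggested above:

import numpy as np

def blocked_transpose(a, block=64):
    # visit the matrix tile by tile instead of full row by full row,
    # so each source/destination tile stays resident in cache
    n, m = a.shape
    out = np.empty((m, n), dtype=a.dtype)
    for i in range(0, n, block):
        for j in range(0, m, block):
            out[j:j+block, i:i+block] = a[i:i+block, j:j+block].T
    return out

Timing this against a plain element-by-element loop on a large matrix should make the cache effect visible.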
KNN is such a straightforward algorithm that it's easy to implement:
import numpy as np

def knn_predict(X_train, y_train, X_test, k):
    # distance of each test point from every point in X_train
    dists = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=-1)
    # top k closest points, then majority vote (integer labels assumed)
    nearest = np.argpartition(dists, k, axis=1)[:, :k]
    return np.array([np.bincount(y_train[i]).argmax() for i in nearest])
Yet the time complexity of this is not good enough. How is the algorithm optimized when it is implemented in practice (e.g., what tricks or data structures are used)?
The k-nearest neighbor algorithm differs from other learning methods because no model is induced from the training examples. The data remain as they are; they are simply stored in memory.
A genetic algorithm can be combined with k-NN to improve performance. Another successful technique, known as instance selection, has also been proposed to address the storage cost and noise sensitivity of k-NN simultaneously. You can try this: when a new instance needs to be classified, instead of involving all learning instances to retrieve the k neighbors, which increases the computing time, first select a smaller subset of instances, as in the sketch below.
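For illustration only, here is a deliberately simple stand-in for a real instance-selection method (plain random subsampling); published methods choose the subset more carefully:

import numpy as np

def select_instances(X_train, y_train, fraction=0.2, seed=0):
    # keep a random subset of the training instances; new queries are
    # classified against this smaller set instead of the full one
    rng = np.random.default_rng(seed)
    n_keep = int(len(X_train) * fraction)
    idx = rng.choice(len(X_train), size=n_keep, replace=False)
    return X_train[idx], y_train[idx]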
You can also try:
Improving k-NN speed by reducing the number of training documents
Improving k-NN by neighborhood size and similarity function
Improving k-NN by advanced storage structures
What you describe is the brute-force kNN calculation, with complexity O(size(X_test) * size(X_train) * d), where d is the number of dimensions of the feature vectors.
More efficient solutions use spatial indexing to put an index on the X_train data. This typically reduces individual lookups to O(log(size(X_train)) * d) or even O(log(size(X_train)) + d); see the kD-tree sketch after the list below.
Common spatial indexes are:
kD-Trees (they are often used, but scale badly with 'd')
R-Trees, such as the RStarTree
Quadtrees (usually not efficient for large 'd', but the PH-Tree, for example, works well with d=1000 and has excellent remove/insertion times; disclaimer: this is my own work)
BallTrees (I don't really know much about them)
CoverTrees (very fast lookup for high 'd', but long build-up times)
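As a minimal sketch of the kD-tree route, here is a lookup with SciPy's cKDTree (the data is made up, and low-dimensional on purpose, since that is where kD-trees shine):

import numpy as np
from scipy.spatial import cKDTree

X_train = np.random.rand(100_000, 3)   # hypothetical training points
X_test = np.random.rand(1_000, 3)

tree = cKDTree(X_train)                # build the index once
dists, idx = tree.query(X_test, k=5)   # each lookup ~O(log n) for small d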
There is also the class of 'approximate' NN searches/queries. These trade correctness for speed; they may skip a few of the closest neighbors. You can find a performance comparison and numerous Python implementations here.
If you are looking for Java implementations of some of the spatial indexes above, have a look at my implementations.
I have implemented a k-nearest neighbor search on the GPU using both pure CUDA and Thrust library calls.
Euclidean distances are computed with a pure CUDA kernel. Then, Thrust sorting facilities (radix sort) are used to sort the distances in increasing order. Finally, the K first elements (i.e. the K nearest neighbors) are retrieved from the sorted vectors.
My implementation works well. However, sorting the entire Euclidean distance matrix (sets can contain more than 250,000 training samples) just to retrieve the k-NN seems non-optimal.
Therefore, I'm searching for a GPU algorithm implementation that can stop the sorting computation once the K smallest elements are found, or that performs an efficient K-out-of-N selection. That would indeed be faster than sorting the entire matrix when K is small.
If such an implementation is not available, I would also be interested in advice on implementing it efficiently in pure CUDA or Thrust. I was thinking of using a few threads per test sample to look for the K nearest, each thread running over a part of the Euclidean distances. I would maintain a buffer of size K in shared memory, run through the distances, and insert the k-NN candidates into the shared-memory vector. However, this would require some warp-level synchronization and would cause thread divergence.
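For reference, here is the selection logic I have in mind, sketched on the CPU in Python (a bounded max-heap of size K; each GPU thread would run the same idea over its slice of the distances):

import heapq

def k_smallest(distances, k):
    # stream through the distances once, keeping only the k best
    # candidates; heap[0] always holds the worst candidate kept
    heap = []  # entries are (-distance, index) to emulate a max-heap
    for i, d in enumerate(distances):
        if len(heap) < k:
            heapq.heappush(heap, (-d, i))
        elif d < -heap[0][0]:
            heapq.heapreplace(heap, (-d, i))
    return sorted((-neg_d, i) for neg_d, i in heap)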
Thank you for your help.
You are seeking an approach for the K-nearest neighbor problem consisting of two steps:
Finding the Euclidean distances between the elements;
Finding the first K elements providing the K smallest distances.
It seems that such an approach already exists and has been implemented in
K.Kato and T.Hosino, "Solving k-Nearest Neighbor Problem on Multiple Graphics Processors"
and presented at the 2009 GTC Conference as
K.Kato and T.Hosino, "You Might Also Like: A Multi-GPU Recommendation System".
The approach solves the above two steps by
using the classical N-body approach developed in L.Nyland, M.Harris, J.Prins, "Fast N-body simulation with CUDA," In: GPU Gems III. NVIDIA (2007) 677–695 to calculate the Euclidean distances;
using the partial sorting technique, based on a parallel heapsort idea.
Again, as mentioned in my comment above, a better approach avoiding your "brute-force" one would be to use KD-trees.
I am trying to write a demo for an embedded processor, which has a multicore architecture and is very fast at floating-point calculations. The problem is that my current hardware is the processor connected through an evaluation board, where the DRAM-to-chip rate is somewhat limited and the board-to-PC rate is very slow and inefficient.
Thus, when demonstrating big matrix multiplication, I can do, say, 128x128 matrices in a couple of milliseconds, but the I/O, which takes (many) seconds, kills the demo.
So, I am looking for some kind of calculation with complexity higher than n^3 (the higher the better, but preferably easy to program and to explain/understand) to make the computation part more dominant in the time budget, with the dataset preferably bounded to about 16KB per thread (core).
Any suggestion?
PS: I think it is very similar to this question in its essence.
You could generate large (256-bit) numbers and factor them; that's commonly used in "stress-test" tools. If you specifically want to exercise floating-point computation, you can build a basic n-body simulator with a Runge-Kutta integrator and run that, as in the sketch below.
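Here is a minimal sketch of such a simulator in Python with NumPy (the gravitational constant, softening term, and step size are arbitrary placeholders); each step costs O(n^2) floating-point work on a dataset of only O(n) size:

import numpy as np

def accelerations(pos, masses, G=1.0, eps=1e-3):
    # pairwise gravitational pulls, softened to avoid division by zero
    diff = pos[None, :, :] - pos[:, None, :]
    dist3 = (np.sum(diff**2, axis=-1) + eps**2) ** 1.5
    return G * np.sum(masses[None, :, None] * diff / dist3[:, :, None], axis=1)

def rk4_step(pos, vel, masses, dt=1e-3):
    # one classical Runge-Kutta step for the coupled (pos, vel) system
    k1x, k1v = vel,              accelerations(pos, masses)
    k2x, k2v = vel + 0.5*dt*k1v, accelerations(pos + 0.5*dt*k1x, masses)
    k3x, k3v = vel + 0.5*dt*k2v, accelerations(pos + 0.5*dt*k2x, masses)
    k4x, k4v = vel + dt*k3v,     accelerations(pos + dt*k3x, masses)
    pos = pos + dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
    vel = vel + dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return pos, vel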
What you can do is:
Declare a std::vector of int.
Populate it with the values N-1 down to 0.
Keep calling std::next_permutation until the elements are sorted again, i.e., until next_permutation returns false.
With N integers this needs O(N!) calculations, and it is also deterministic.
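The same idea sketched in Python (the steps above are phrased in C++ terms; itertools.permutations plays the role of std::next_permutation):

from itertools import permutations
from math import factorial

def permutation_busywork(n):
    # walk all n! orderings: deterministic O(n!) work, O(n) memory
    count = sum(1 for _ in permutations(range(n - 1, -1, -1)))
    assert count == factorial(n)
    return count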
PageRank may be a good fit. Articulated as a linear algebra problem, you repeatedly square a certain floating-point matrix of controllable size until convergence. In the graphical metaphor, each node "ripples" incoming change onto its outgoing edges. Both treatments can be made parallel.
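A minimal sketch of the linear-algebra treatment in Python, assuming the transition matrix is column-stochastic with damping already folded in (under that assumption, every column of the matrix powers converges to the PageRank vector):

import numpy as np

def pagerank_by_squaring(P, tol=1e-12, max_rounds=50):
    # repeated squaring: M, M^2, M^4, ... until the powers stabilize
    M = P.copy()
    for _ in range(max_rounds):
        M_next = M @ M
        if np.max(np.abs(M_next - M)) < tol:
            break
        M = M_next
    return M[:, 0]  # any column now approximates the PageRank vector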
You could do a least-trimmed-squares fit. One use of this is to identify outliers in a data set. For example, you could generate samples from some smooth function (a polynomial, say) and add (large) noise to some of the samples; the problem is then to find a subset H of the samples, of a given size, that minimises the sum of the squares of the residuals (for the polynomial fitted to the samples in H). Since there are a large number of such subsets, you have a lot of fits to do! There are approximate algorithms for this, for example here.
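A crude randomized sketch in Python (real least-trimmed-squares algorithms are smarter about choosing subsets; here we just fit many random size-h subsets and keep the best trimmed cost):

import numpy as np

def lts_polyfit(x, y, h, degree=3, trials=500, seed=0):
    # keep the fit whose h smallest squared residuals have the lowest sum
    rng = np.random.default_rng(seed)
    best_coef, best_cost = None, np.inf
    for _ in range(trials):
        idx = rng.choice(len(x), size=h, replace=False)
        coef = np.polyfit(x[idx], y[idx], degree)
        resid2 = (y - np.polyval(coef, x)) ** 2
        cost = np.sort(resid2)[:h].sum()
        if cost < best_cost:
            best_coef, best_cost = coef, cost
    return best_coef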
Well, one way to go would be to implement a brute-force solver for the Traveling Salesman Problem in some M-dimensional space (with M > 1).
The brute-force solution is to just try every possible permutation and then calculate the total distance for each permutation, without any optimizations (including no dynamic programming tricks like memoization).
For N points, there are N! permutations (with a redundancy factor of at least (N-1), but remember, no optimizations). Each pair of points requires M subtractions, M multiplications, and one square root operation to determine their Pythagorean distance apart. Each permutation has (N-1) pairs of points to calculate and add to the total distance.
So order of computation is O(M((N+1)!)), whereas storage space is only O(N).
Also, this should be neither too hard nor too intensive to parallelize across the cores, though it does take some overhead. (I can demonstrate, if needed; see the sketch below.)
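A minimal single-threaded sketch in Python (points are tuples of M coordinates; parallelizing would amount to splitting the permutations across cores):

import math
from itertools import permutations

def brute_force_tsp(points):
    # try every ordering: O(M * (N+1)!) work, O(N) extra storage
    best_dist, best_route = math.inf, None
    for route in permutations(points):
        d = sum(math.dist(route[i], route[i + 1])
                for i in range(len(route) - 1))
        if d < best_dist:
            best_dist, best_route = d, route
    return best_dist, best_route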
Another idea might be to compute a fractal map. Basically, choose a grid of whatever dimensionality you want. Then, for each grid point, run the fractal iteration to get its value. Some points might require only a few iterations; I believe some will iterate forever (chaos; of course, this can't really happen when you have a finite number of floating-point numbers, but still). The ones that don't stop you'll have to cut off after a certain number of iterations; just make this preposterously high, and you should be able to demonstrate a high-quality fractal map.
Another benefit of this is that grid cells are processed completely independently, so you will never need to do communication (not even at boundaries, as in stencil computations, and definitely not O(pairwise) as in direct N-body simulations). You can usefully use O(gridcells) number of processors to parallelize this, although in practice you can probably get better utilization by using gridcells/factor processors and dynamically scheduling grid points to processors on an as-ready basis. The computation is basically all floating-point math.
Mandelbrot/Julia and Lyapunov fractals come to mind as potential candidates, but any should do.
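A minimal Mandelbrot sketch in Python with NumPy (the grid bounds and iteration cap are arbitrary; every grid cell is independent, which is exactly what makes the scheme embarrassingly parallel):

import numpy as np

def mandelbrot_grid(nx, ny, max_iter=1000):
    # escape-time iteration z -> z^2 + c, counted per grid point
    xs = np.linspace(-2.0, 1.0, nx)
    ys = np.linspace(-1.5, 1.5, ny)
    c = xs[None, :] + 1j * ys[:, None]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=np.int64)
    for _ in range(max_iter):
        active = np.abs(z) <= 2.0      # points that have not escaped yet
        if not active.any():
            break
        z[active] = z[active] ** 2 + c[active]
        counts[active] += 1
    return counts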
So I have about 16,000 75-dimensional data points, and for each point I want to find its k nearest neighbours (using Euclidean distance; currently k=2, if that makes it easier).
My first thought was to use a kd-tree for this, but as it turns out they become rather inefficient as the number of dimensions grows. In my sample implementation, it's only slightly faster than exhaustive search.
My next idea would be to use PCA (Principal Component Analysis) to reduce the number of dimensions, but I was wondering: is there some clever algorithm or data structure to solve this exactly in reasonable time?
The Wikipedia article for kd-trees has a link to the ANN library:
ANN is a library written in C++, which supports data structures and algorithms for both exact and approximate nearest neighbor searching in arbitrarily high dimensions.
Based on our own experience, ANN performs quite efficiently for point sets ranging in size from thousands to hundreds of thousands, and in dimensions as high as 20. (For applications in significantly higher dimensions, the results are rather spotty, but you might try it anyway.)
As far as algorithm/data structures are concerned:
The library implements a number of different data structures, based on kd-trees and box-decomposition trees, and employs a couple of different search strategies.
I'd try it directly first, and if that doesn't produce satisfactory results, I'd use it on the data set after applying PCA/ICA (since it's quite unlikely you're going to end up with few enough dimensions for a kd-tree to handle).
use a kd-tree
Unfortunately, in high dimensions this data structure suffers severely from the curse of dimensionality, which causes its search time to be comparable to the brute force search.
reduce the number of dimensions
Dimensionality reduction is a good approach, which offers a fair trade-off between accuracy and speed. You lose some information when you reduce your dimensions, but gain some speed.
By accuracy I mean finding the exact Nearest Neighbor (NN).
Principal Component Analysis (PCA) is a good idea when you want to reduce the dimensionality of the space your data live in.
Is there some clever algorithm or data structure to solve this exactly in reasonable time?
Approximate nearest neighbor search (ANNS), where you are satisfied with finding a point that might not be the exact nearest neighbor, but rather a good approximation of it (for example, the 4th NN to your query, while you are looking for the 1st NN).
That approach costs you accuracy, but increases performance significantly. Moreover, the probability of finding a good NN (close enough to the query) is relatively high.
You can read more about ANNS in the introduction of our kd-GeRaF paper.
A good idea is to combine ANNS with dimensionality reduction.
Locality Sensitive Hashing (LSH) is a modern approach to the nearest neighbor problem in high dimensions. The key idea is that points that lie close to each other are hashed to the same bucket. So when a query arrives, it is hashed to a bucket, and that bucket (and usually its neighboring ones) contains good NN candidates.
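A minimal random-hyperplane LSH sketch in Python (this variant targets cosine similarity; bucket keys are the sign patterns of the projections, and the returned candidates still need exact verification):

import numpy as np

class RandomHyperplaneLSH:
    def __init__(self, dim, n_planes=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_planes, dim))
        self.buckets = {}

    def _key(self, v):
        # nearby directions tend to fall on the same side of each plane
        return tuple(bool(s) for s in self.planes @ v > 0)

    def add(self, label, v):
        self.buckets.setdefault(self._key(v), []).append(label)

    def candidates(self, v):
        # NN candidates from the query's bucket; verify exactly afterwards
        return self.buckets.get(self._key(v), [])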
FALCONN is a good C++ implementation that focuses on cosine similarity. Another good implementation is our DOLPHINN, which is a more general library.
You could conceivably use Morton Codes, but with 75 dimensions they're going to be huge. And if all you have is 16,000 data points, exhaustive search shouldn't take too long.
There's no reason to believe this is NP-complete. You're not really optimizing anything, and I'd have a hard time figuring out how to convert this to another NP-complete problem (I have Garey and Johnson on my shelf and can't find anything similar). Really, I'd just pursue more efficient methods of searching and sorting. If you have n observations, you have to calculate n x n distances right up front. Then, for every observation, you need to pick out the top k nearest neighbors. That's n squared for the distance calculations and n log(n) for a sort, but you have to do the sort n times (a different one for EVERY observation). Messy, but still polynomial time to get your answers.
A BK-Tree isn't such a bad thought. Take a look at Nick's Blog on Levenshtein Automata. While his focus is strings, it should give you a springboard for other approaches. The other thing I can think of is R-Trees; however, I don't know whether they've been generalized for large dimensions. I can't say more than that, since I have neither used them directly nor implemented them myself.
One very common implementation would be to sort the Nearest Neighbours array that you have computed for each data point.
As sorting the entire array can be very expensive, you can use methods like indirect partial sorting, for example numpy.argpartition in the NumPy library, to extract only the closest K values you are interested in. There is no need to sort the entire array.
The cost of #Grembo's answer above can be reduced significantly this way, as you only need the K nearest values; there is no need to sort all the distances from each point.
If you just need the K neighbours, this method works very well, reducing your computational cost and time complexity.
If you need the K neighbours sorted, sort the output again, as in the sketch below.
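A minimal usage sketch (the distance matrix here is made-up random data standing in for precomputed pairwise distances):

import numpy as np

dists = np.random.rand(1000, 1000)  # hypothetical pairwise distances
k = 5

nearest = np.argpartition(dists, k, axis=1)[:, :k]  # unsorted k closest
# if sorted neighbours are needed, sort only these k columns per row
order = np.take_along_axis(dists, nearest, axis=1).argsort(axis=1)
nearest_sorted = np.take_along_axis(nearest, order, axis=1)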
See the documentation for argpartition.