The question is about KNN algorithm for classification - the class labels of training samples are discrete.
Suppose that the training set has n points that are identical to the new pattern we are about to classify, that is, the distances from these points to the new observation are zero (or below some small epsilon). It may happen that these identical training points have different class labels. Now suppose that n < K and there are some other training points which are part of the nearest-neighbor collection but have non-zero distances to the new observation. How do we assign the class label to the new point in this case?
There are a few possibilities, such as:
consider all K (or more if there are ties with the worst nearest neighbor) neighbors and do majority voting
ignore the neighbors with non-zero distances if there are "clones" of the new point in training data and take the majority vote only over the clones
same as 2. but assign the class with the highest prior probability in the training data (among clones)
...
Any ideas? (references would be appreciated as well)
Each of the proposed methods will work in some problems, and in some they won't. In general, there is no need to actually think about such border cases; simply use the default behaviour (option 1 from your question). In fact, if the border cases of any classification algorithm become a problem, it is a signal of at least one of:
bad problem definition,
bad data representation,
bad data preprocessing,
bad model used.
From a theoretical point of view, nothing changes if some points lie exactly on top of your training data. The only difference would be: if you have a consistent training set (in the sense that duplicates with different labels do not occur in the training data) that is 100% correct (each label is a perfect labeling for its point), then it would be reasonable to add an if clause that answers according to the label of that point. But in reality that is rarely the case.
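To make the options concrete, here is a minimal sketch of options 1 and 2 (assuming NumPy arrays, Euclidean distance, and a hypothetical knn_predict helper; the eps threshold is my own assumption, not part of any standard implementation):

    import numpy as np
    from collections import Counter

    def knn_predict(X_train, y_train, x_new, k=5, eps=1e-12, clones_only=False):
        """Option 1 by default: majority vote over the k nearest points.
        With clones_only=True, vote only over exact (near-zero-distance)
        duplicates if any exist (option 2)."""
        y_train = np.asarray(y_train)
        dists = np.linalg.norm(np.asarray(X_train) - np.asarray(x_new), axis=1)

        if clones_only:
            clones = np.where(dists < eps)[0]
            if len(clones) > 0:
                return Counter(y_train[clones]).most_common(1)[0][0]

        nearest = np.argsort(dists)[:k]
        return Counter(y_train[nearest]).most_common(1)[0][0]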
The problem I am working on requires me to assign a fixed number of points from one series to the same number of points from another series. Think of it as assigning predicted values to corresponding experimental values. For this case, assume each point belongs to R^2, and we are only concerned about the average Euclidean distance between the resultant matchings.
One way to do this is to greedily assign the points with the least possible distance, which does give one of the lowest average distances, but we cannot be sure it is the absolute best. Besides, points left to be assigned at the end wind up connected to really far points, since no close alternatives remain.
Another method to do this is via simulated annealing or similar methods that find the global minimum, but on a sample set of 100 points generated randomly, this performs significantly worse than the greedy approach.
For some more context on the problem: I am trying to create an NMR assignment algorithm that predicts chemical shift values for a given structure and assigns them to experimental values obtained. I am working with a 1H-15N HSQC plot which is just a 2D graph of points. I would also like to hear if someone has come across a similar problem that was solved using machine learning techniques. As far as I see it, ML is good for classification and regression but not really for such assignment type problems.
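This one-to-one matching that minimises the total (and hence the average) Euclidean distance is exactly the linear assignment problem, which can be solved optimally in polynomial time rather than greedily or by annealing. A sketch using SciPy (the array names are made up for illustration, and in practice the 1H and 15N axes may need rescaling so the two dimensions are comparable):

    import numpy as np
    from scipy.spatial.distance import cdist
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)
    predicted = rng.random((100, 2))       # predicted (1H, 15N) shifts
    experimental = rng.random((100, 2))    # experimental (1H, 15N) shifts

    cost = cdist(predicted, experimental)        # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)     # optimal one-to-one assignment
    print("mean matched distance:", cost[rows, cols].mean())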
For research I'm working on, I'm trying to find a satisfying heuristic based on Manhattan distance that can work with any problem and domain as input, which is also known as a domain-independent heuristic.
For now, I know how to apply Manhattan distance to grid-based problems.
Can someone give a tip on how to generalize it to work on every domain and problem, and not just grid-based ones?
The generalization of Manhattan distance is simple. It is a metric which defines the distance between two multi-dimensional points as the sum of the distances along each dimension:
md(A, B) = dist(a1, b1) + dist(a2, b2) + ...
The distances along each dimension are assumed to be simple to calculate. For numbers, the distance is the absolute value of the difference between the values.
This can be extended to other domains as well. For instance, the distance between two strings could be taken as the Levenshtein distance -- and that would prove to be an interesting metric in conjunction with other dimensions.
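A small sketch of that idea, with one distance function per dimension (the function names are mine; the Levenshtein implementation is the plain dynamic-programming version):

    def manhattan(a, b, dists):
        """Generalised Manhattan distance: sum of per-dimension distances."""
        return sum(d(x, y) for d, x, y in zip(dists, a, b))

    def abs_diff(x, y):
        return abs(x - y)

    def levenshtein(s, t):
        # O(len(s) * len(t)) edit distance for a string-valued dimension.
        prev = list(range(len(t) + 1))
        for i, cs in enumerate(s, 1):
            cur = [i]
            for j, ct in enumerate(t, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (cs != ct)))
            prev = cur
        return prev[-1]

    # Two numeric dimensions plus one string dimension.
    print(manhattan((1, 5.0, "kitten"), (4, 2.5, "sitting"),
                    (abs_diff, abs_diff, levenshtein)))   # 3 + 2.5 + 3 = 8.5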
The Manhattan distance heuristic is an attempt to measure the minimum number of steps required to find a path to the goal state. The closer you get to the actual number of steps, the fewer nodes have to be expanded during search; at the extreme, with a perfect heuristic, you only expand nodes that are guaranteed to be on the goal path.
For a more academic approach to generalizing this idea, you want to search around for domain-independent heuristics; there was a lot of research done on this in the late 1990s and early 2000s, although even today a small amount of domain knowledge can usually get you much better results. That being said, there are some good places to start:
delete relaxation: the expand function probably contains some restrictions; remove one or more of those restrictions and you end up with a much easier problem, one that can probably be solved in real time, and you use the value obtained on that relaxed problem as the heuristic value. E.g. in the sliding tile puzzle, delete the constraint that a piece cannot move on top of other pieces and you end up with the Manhattan distance; additionally relax the constraint that a piece can only move to adjacent squares and you end up with the Hamming distance heuristic (see the sketch after this list).
abstraction: mapping every state in the real search to a smaller abstract state space that you can fully evaluate. Pattern databases are a very popular tool in this area.
critical paths: when you know you must pass through specific states (in either the real state space or an abstract state space), you can perform multiple searches between only the critical points to greatly cut down the number of nodes you would have to search in the full state space
landmarks: very accurate heuristics at the cost of typically high computation time. Landmarks are specific states for which you precompute the distance to every other state (typically 5-25 landmarks are used, depending on graph size); you then compute a lower bound on the remaining distance from those precomputed values when evaluating each node.
There are a few other classes of domain independent heuristics, but these are the most popular and widely used in classical planning applications.
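As a concrete instance of the delete relaxation for the sliding-tile puzzle mentioned above, here is a sketch of the resulting Manhattan distance heuristic (the function name and the tuple encoding of states are my own choices):

    def manhattan_heuristic(state, goal, width=3):
        """Sum over tiles of |row difference| + |column difference|
        between current and goal positions; 0 denotes the blank."""
        goal_pos = {tile: divmod(i, width) for i, tile in enumerate(goal)}
        h = 0
        for i, tile in enumerate(state):
            if tile == 0:                 # the blank does not contribute
                continue
            r, c = divmod(i, width)
            gr, gc = goal_pos[tile]
            h += abs(r - gr) + abs(c - gc)
        return h

    goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
    state = (1, 2, 3, 4, 0, 6, 7, 5, 8)
    print(manhattan_heuristic(state, goal))   # 2: tiles 5 and 8 are each one move away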
I have any number of points on an imaginary 2D surface. I also have a grid on the same surface with points at regular intervals along the X and Y axes. My task is to map each point to the nearest grid point.
The code is straightforward enough until there is a shortage of grid points. The code I've been developing finds the closest grid point, displacing an already mapped point if the distance will be shorter for the current point.
I then added a second step that compares each mapped point to another and, if swapping the mapping with another point produces a smaller sum of the total mapped distance of both points, I swap them.
This last step seems important as it reduces the number of crossed map lines. (This would be used to map points on a plate to a grid on another plate, with pins connecting the two, and lines that don't cross seem to have a higher chance that the pins would not make contact.)
Questions:
Can anyone comment on my thinking that if the image above were truly optimized (that is, the mapped points, overall, had the smallest total distance), then none of the lines would cross?
And has anyone seen any existing algorithms to help with this? I've searched but came up with nothing.
The problem could be approached as a variation of the Assignment Problem, with the "agents" being the grid squares and the points being the "tasks" (or vice versa), and the distance between them being the "cost" for that agent-task combination. You could solve it with the Hungarian algorithm.
To handle the fact that there are more grid squares than points, find a bounding box for the possible grid squares you want to consider and add dummy points that have a cost of 0 associated with all grid squares.
The Hungarian algorithm is O(n³); perhaps your approach is already good enough.
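A sketch of the assignment-problem formulation using SciPy's linear_sum_assignment (the point and grid arrays are made up for illustration). I believe the SciPy solver accepts rectangular cost matrices directly, in which case unused grid squares are simply left unassigned and the zero-cost dummy-point padding described above is not strictly needed:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    points = np.random.rand(5, 2)                        # points to place
    gx, gy = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
    grid = np.column_stack([gx.ravel(), gy.ravel()])     # 16 candidate grid points

    # Rectangular cost matrix: rows = points, columns = grid points.
    cost = np.linalg.norm(points[:, None, :] - grid[None, :, :], axis=2)

    rows, cols = linear_sum_assignment(cost)
    print(list(zip(rows.tolist(), cols.tolist())), cost[rows, cols].sum())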
See also:
How to find the optimal mapping between two sets?
How to optimize assignment of tasks to agents with these constraints?
If I understand your main concern correctly (minimising the total length of the line segments), the algorithm you used does not find the best mapping, and this is clear in your image: when two line segments cross each other, simple geometry (the triangle inequality) says that if you rearrange their endpoints so that they do not cross, you get a smaller total sum. You can use this simple approach (rearranging crossed pairs) to get a better approximation of the optimum; you should apply the swapping for many iterations.
In the following picture you can see why a crossing has a longer total length than a non-crossing arrangement (first question), and also why after swapping once there may still be crossing edges (second question and with respect to the comments). I just drew one sample; in fact, one may need many iterations of swapping to get a non-crossed result.
This is a heuristic algorithm, certainly not optimal, but I expect it to be very good, efficient, and simple to implement.
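A minimal sketch of that repeated-swapping idea (the names, the tolerance, and the iteration cap are my own); assign[i] holds the index of the grid point currently mapped to point i:

    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    def uncross(points, grid, assign, max_iters=100):
        """Swap pairs of assignments whenever the swap shortens the total
        mapped distance; this also removes crossings over time."""
        for _ in range(max_iters):
            improved = False
            for i in range(len(points)):
                for j in range(i + 1, len(points)):
                    old = dist(points[i], grid[assign[i]]) + dist(points[j], grid[assign[j]])
                    new = dist(points[i], grid[assign[j]]) + dist(points[j], grid[assign[i]])
                    if new < old - 1e-12:
                        assign[i], assign[j] = assign[j], assign[i]
                        improved = True
            if not improved:
                break
        return assign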
Problem Statement:
I have the following problem:
There are more than a billion points in 3D space. The goal is to find the top N points which have the largest number of neighbors within a given distance R. Another condition is that the distance between any two of those top N points must be greater than R. The distribution of the points is not uniform; it is very common that certain regions of the space contain a lot of points.
Goal:
To find an algorithm that can scale well to many processors and has a small memory requirement.
Thoughts:
Normal spatial decomposition is not sufficient for this kind of problem due to the non-uniform distribution. An irregular spatial decomposition that evenly divides the number of points may help with the problem. I would really appreciate it if someone could shed some light on how to solve this problem.
Use an Octree. For 3D data with a limited value domain it scales very well to huge data sets.
Many of the aforementioned methods such as locality sensitive hashing are approximate versions designed for much higher dimensionality where you can't split sensibly anymore.
Splitting at each level into 8 bins (2^d for d=3) works very well. And since you can stop splitting when there are too few points in a cell, and build a deeper tree where there are a lot of points, this should fit your requirements quite well.
For more details, see Wikipedia:
https://en.wikipedia.org/wiki/Octree
Alternatively, you could try to build an R-tree. But the R-tree tries to balance, making it harder to find the most dense areas. For your particular task, this drawback of the Octree is actually helpful! The R-tree puts a lot of effort into keeping the tree depth equal everywhere, so that each point can be found at approximately the same time. However, you are only interested in the dense areas, which will be found on the longest paths in the Octree without even having to look at the actual points yet!
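A count-only octree node along these lines might look like the sketch below (LEAF_SIZE, the maximum depth, and the node layout are assumptions on my part); the densest regions show up as the deepest, highest-count branches:

    from dataclasses import dataclass, field

    LEAF_SIZE = 1000   # stop splitting below this many points (tunable)

    @dataclass
    class OctreeNode:
        center: tuple
        half: float                    # half the side length of this cube
        count: int = 0
        children: list = field(default_factory=list)

    def build(points, center, half, depth=0, max_depth=21):
        """Recursively split into up to 8 octants, keeping only counts in each node."""
        node = OctreeNode(center, half, count=len(points))
        if len(points) <= LEAF_SIZE or depth >= max_depth:
            return node
        buckets = {}
        for p in points:
            key = tuple(int(p[d] >= center[d]) for d in range(3))
            buckets.setdefault(key, []).append(p)
        for key, pts in buckets.items():
            child_center = tuple(center[d] + (half / 2 if key[d] else -half / 2)
                                 for d in range(3))
            node.children.append(build(pts, child_center, half / 2, depth + 1, max_depth))
        return node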
I don't have a definite answer for you, but I have a suggestion for an approach that might yield a solution.
I think it's worth investigating locality-sensitive hashing. I think dividing the points evenly and then applying this kind of LSH to each set should be readily parallelisable. If you design your hashing algorithm such that the bucket size is defined in terms of R, it seems likely that for a given set of points divided into buckets, the points satisfying your criteria are likely to exist in the fullest buckets.
Having performed this locally, perhaps you can apply some kind of map-reduce-style strategy to combine spatial buckets from different parallel runs of the LSH algorithm in a step-wise manner, making use of the fact that you can begin to exclude parts of your problem space by discounting entire buckets. Obviously you'll have to be careful about edge cases that span different buckets, but I suspect that at each stage of merging you could apply different bucket sizes/offsets such that you remove this effect (e.g. merging spatially equivalent buckets as well as adjacent buckets). I believe this method could be used to keep memory requirements small (i.e. you shouldn't need to store much more than the points themselves at any given moment, and you are always operating on small(ish) subsets).
If you're looking for some kind of heuristic then I think this result will immediately yield something resembling a "good" solution - i.e. it will give you a small number of probable points which you can check satisfy your criteria. If you are looking for an exact answer, then you are going to have to apply some other methods to trim the search space as you begin to merge parallel buckets.
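A very simple version of that bucketing uses an axis-aligned grid of cell size R as the hash (the function names and the cubic cells are my choice); points within R of each other can only fall in the same cell or in one of the 26 adjacent cells, which is what makes per-bucket merging and exclusion possible:

    from collections import defaultdict

    def bucket_points(points, r):
        """Hash each 3D point into an axis-aligned cube of side r."""
        buckets = defaultdict(list)
        for p in points:
            key = (int(p[0] // r), int(p[1] // r), int(p[2] // r))
            buckets[key].append(p)
        return buckets

    def fullest_buckets(buckets, top=10):
        """Candidate dense regions: the buckets holding the most points."""
        return sorted(buckets.items(), key=lambda kv: len(kv[1]), reverse=True)[:top]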
Another thought I had was that this could relate to finding the metric k-center. It's definitely not the exact same problem, but perhaps some of the methods used in solving that are applicable in this case. The problem is that this assumes you have a metric space in which computing the distance metric is possible - in your case, however, the presence of a billion points makes it undesirable and difficult to perform any kind of global traversal (e.g. sorting of the distances between points). As I said, just a thought, and perhaps a source of further inspiration.
Here are some possible parts of a solution. There are various choices at each stage, which will depend on Ncluster, on how fast the data changes, and on what you want to do with the means.
3 steps: quantize, box, K-means.
1) quantize: reduce the input XYZ coordinates to say 8 bits each, by taking 2^8 percentiles of X, Y, Z separately. This will speed up the whole flow without much loss of detail. You could sort all 1G points, or just a random 1M, to get 8-bit x0 < x1 < ... x256, y0 < y1 < ... y256, z0 < z1 < ... z256 with 2^(30-8) points in each range. To map float X -> 8-bit x, unrolled binary search is fast; see Bentley, Pearls p. 95.
Added: Kd-trees split any point cloud into different-sized boxes, each with ~Leafsize points, much better than splitting X Y Z as above. But afaik you'd have to roll your own Kd-tree code to split only the first say 16M boxes, and keep counts only, not the points.
2) box: count the number of points in each 3D box [xj .. xj+1, yj .. yj+1, zj .. zj+1]. The average box will have 2^(30-3*8) points; the distribution will depend on how clumpy the data is. If some boxes are too big or get too many points, you could a) split them into 8, or b) track the centre of the points in each box; otherwise just take box midpoints.
3) K-means clustering on the 2^(3*8) box centres. (Google parallel "k means" -> 121k hits.) This depends strongly on K aka Ncluster, also on your radius R. A rough approach would be to grow a heap of the say 27*Ncluster boxes with the most points, then take the biggest ones subject to your Radius constraint. (I like to start with a Minimum spanning tree, then remove the K-1 longest links to get K clusters.) See also Color quantization.
I'd make Nbit, here 8, a parameter from the beginning.
What is your Ncluster?
Added: if your points are moving in time, see collision-detection-of-huge-number-of-circles on SO.
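A sketch of the quantize-and-box steps using NumPy (array sizes are shrunk for the demo, and box_counts is a made-up helper); the fullest boxes are the candidate dense regions to feed into step 3:

    import numpy as np

    def box_counts(points, nbit=8):
        """Quantize each axis to 2**nbit percentile bins, then count points per 3D box."""
        edges = [np.quantile(points[:, d], np.linspace(0, 1, 2**nbit + 1)) for d in range(3)]
        counts, _ = np.histogramdd(points, bins=edges)
        return counts, edges

    pts = np.random.randn(1_000_000, 3)             # stand-in for the real 1G points
    counts, edges = box_counts(pts, nbit=4)         # 16^3 boxes for the demo
    dense = np.argsort(counts.ravel())[::-1][:10]   # the 10 fullest boxes
    print(counts.ravel()[dense])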
I would also suggest using an octree. The OctoMap framework is very good at dealing with huge 3D point clouds. It does not store all the points directly, but updates the occupancy density of every node (aka 3D box).
After the tree is built, you can use a simple iterator to find the node with the highest density. If you would like to model the point density or distribution inside the nodes, OctoMap is very easy to adapt.
Here you can see how it was extended to model the point distribution using a planar model.
Just an idea. Create a graph with given points and edges between points when distance < R.
Creation of this kind of graph is similar to spatial decomposition. Your questions can then be answered with local search in the graph: the first amounts to finding the vertices with maximum degree, the second to finding a maximal unconnected (independent) set among those max-degree vertices.
I think the creation of the graph and the search can be made parallel. This approach can have a large memory requirement. Splitting the domain and working with graphs for smaller volumes can reduce the memory needed.
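Whatever structure produces the neighbour counts (graph degrees, octree cells, hash buckets), the final selection subject to the pairwise-separation condition can be done greedily. A sketch with made-up names; it yields a maximal R-separated set ordered by count, though it is not guaranteed to be globally optimal:

    def pick_top_n(candidates, counts, n, r):
        """Greedily keep the highest-count candidate points whose pairwise
        distance to all already-chosen points exceeds r."""
        order = sorted(range(len(candidates)), key=lambda i: counts[i], reverse=True)
        chosen = []
        for i in order:
            p = candidates[i]
            if all(sum((p[d] - q[d]) ** 2 for d in range(3)) > r * r for q in chosen):
                chosen.append(p)
                if len(chosen) == n:
                    break
        return chosen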
I need to evaluate if two sets of 3D points are the same (ignoring translations and rotations) by finding and comparing a proper geometric hash. I did some paper research on geometric hashing techniques, and I found a couple of algorithms that, however, tend to be complicated by "vision requirements" (e.g. 2D to 3D, occlusions, shadows, etc.).
Moreover, I would love that, if the two geometries are slightly different, the hashes are also not very different.
Does anybody know some algorithm that fits my need, and can provide some link for further study?
Thanks
Your first thought may be trying to find the rotation that maps one object to another, but this is a very complex topic... and it is not actually necessary! You're not asking how to best match the two, you're just asking whether they are the same or not.
Characterize your model by a list of all interpoint distances. Sort the list by that distance. Now compare the list for each object. They should be identical, since interpoint distances are not affected by translation or rotation.
Three issues:
1) What if the number of points is large, that's a large list of pairs (N*(N-1)/2). In this case you may elect to keep only the longest ones, or even better, keep the 1 or 2 longest ones for each vertex so that every part of your model has some contribution. Dropping information like this however changes the problem to be probabilistic and not deterministic.
2) This only uses vertices to define the shape, not edges. This may be fine (and in practice it will be), but you may have figures with identical vertices yet different connecting edges. If so, test for vertex similarity first. If that passes, then assign a unique labeling to each vertex by using that sorted distance list. The longest edge has two vertices. For each of THOSE vertices, find the vertex with the longest (remaining) edge. Label the first vertex 0 and the next vertex 1. Repeat for the other vertices in order, and you'll have assigned tags which are shift and rotation independent. Now you can compare edge topologies exactly (check that for every edge in object 1 between two vertices, there's a corresponding edge between the same two vertices in object 2). Note: this starts getting really complex if you have multiple identical interpoint distances, and then you need tiebreaker comparisons to make the assignments stable and unique.
3) There's a possibility that two figures have identical edge length populations but they aren't identical.. this is true when one object is the mirror image of the other. This is quite annoying to detect! One way to do it is to use four non-coplanar points (perhaps the ones labeled 0 to 3 from the previous step) and compare the "handedness" of the coordinate system they define. If the handedness doesn't match, the objects are mirror images.
Note the list-of-distances gives you easy rejection of non-identical objects. It also allows you to add "fuzzy" acceptance by allowing a certain amount of error in the orderings. Perhaps taking the root-mean-squared difference between the two lists as a "similarity measure" would work well.
Edit: Looks like your problem is a point cloud with no edges. Then the annoying problem of edge correspondence (#2) doesn't even apply and can be ignored! You still have to be careful of the mirror-image problem #3 though.
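A sketch of the sorted interpoint-distance comparison and the RMS "similarity measure" (the function names are mine); note that reflections preserve pairwise distances too, so the mirror-image caveat from issue 3 still applies:

    import numpy as np
    from scipy.spatial.distance import pdist

    def distance_signature(points):
        """All pairwise distances, sorted; invariant to translation and rotation
        (and, unavoidably, to reflection)."""
        return np.sort(pdist(points))

    def similarity(a, b):
        """Root-mean-squared difference of the two signatures (0 = identical populations)."""
        sa, sb = distance_signature(a), distance_signature(b)
        if len(sa) != len(sb):
            return np.inf
        return float(np.sqrt(np.mean((sa - sb) ** 2)))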
There are a bunch of SIGGRAPH publications which may prove helpful to you.
e.g. "Global Non-Rigid Alignment of 3-D Scans" by Brown and Rusinkiewicz:
http://portal.acm.org/citation.cfm?id=1276404
A general search that can get you started:
http://scholar.google.com/scholar?q=siggraph+point+cloud+registration
Spin images are one way to go about it.
Seems like a numerical optimisation problem to me. You want to find the parameters of the transform which brings one set of points as close as possible to the other. Define some sort of residual or "energy" which is minimised when the points are coincident, and chuck it at some least-squares optimiser or similar. If it manages to optimise the score to zero (or as near as can be expected given floating point error) then the points are the same.
Googling
least squares rotation translation
turns up quite a few papers building on this technique (e.g "Least-Squares Estimation of Transformation Parameters Between Two Point Patterns").
Update following comment below: If a one-to-one correspondence between the points isn't known (as assumed by the paper above), then you just need to make sure the score being minimised is independent of point ordering. For example, if you treat the points as small masses (finite radius spheres to avoid zero-distance blowup) and set out to minimise the total gravitational energy of the system by optimising the translation & rotation parameters, that should work.
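A sketch of that order-independent idea, using a chamfer-style sum of squared nearest-neighbour distances rather than the gravitational energy suggested above (the function names, the Powell optimiser, the rotation-vector parameterisation, and the multi-start loop are all my choices); like any such score it can get stuck in local minima, hence the random restarts:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial import cKDTree
    from scipy.spatial.transform import Rotation

    def registration_error(params, src, dst_tree):
        """Sum of squared distances from each transformed source point to its
        nearest target point; independent of point ordering."""
        rot, trans = Rotation.from_rotvec(params[:3]), params[3:]
        moved = rot.apply(src) + trans
        d, _ = dst_tree.query(moved)
        return np.sum(d ** 2)

    def clouds_match(src, dst, tol=1e-6, restarts=5):
        tree = cKDTree(dst)
        starts = np.random.default_rng(0).normal(size=(restarts, 6))
        best = min(
            (minimize(registration_error, x0, args=(src, tree), method="Powell")
             for x0 in starts),
            key=lambda r: r.fun,
        )
        return best.fun < tol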
If you want to estimate the rigid transform between two similar point clouds you can use the well-established Iterative Closest Point (ICP) method. This method starts with a rough estimate of the transformation and then iteratively optimizes it, by computing nearest neighbors and minimizing an associated cost function. It can be implemented efficiently (even in realtime) and there are implementations available for Matlab, C++, etc. The method has been extended and has several variants, including estimating non-rigid deformations; if you are interested in extensions you should look at computer graphics papers on the scan registration problem, where your problem is a crucial step. For a starting point see the Wikipedia page on Iterative Closest Point, which has several good external links.
[Teaser image from a Matlab implementation designed to match two point clouds; source: mathworks.com]
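A minimal ICP sketch (nearest neighbours via a k-d tree, rigid transform via the standard SVD/Kabsch solution; the function names are mine, and a real implementation would add convergence checks, outlier rejection, and a good initial estimate):

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid(src, dst):
        """Least-squares rotation R and translation t mapping paired src onto dst (Kabsch)."""
        cs, cd = src.mean(0), dst.mean(0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
        R = (U @ Vt).T
        if np.linalg.det(R) < 0:     # guard against reflections
            Vt[-1] *= -1
            R = (U @ Vt).T
        return R, cd - R @ cs

    def icp(src, dst, iters=50):
        """Pair each source point with its nearest target, solve the rigid
        transform, apply it, and repeat."""
        tree = cKDTree(dst)
        cur = src.copy()
        for _ in range(iters):
            _, idx = tree.query(cur)
            R, t = best_rigid(cur, dst[idx])
            cur = cur @ R.T + t
        residual = np.mean(np.linalg.norm(cur - dst[tree.query(cur)[1]], axis=1))
        return cur, residual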
After aligning you could use the final error measure to say how similar the two point clouds are, but this is very much an ad hoc solution; there should be a better one.
Using shape descriptors one can compute fingerprints of shapes which are often invariant under translations/rotations. In most cases they are defined for meshes rather than point clouds; nevertheless there is a multitude of shape descriptors, so depending on your input and requirements you might find something useful. For this, you would want to look into the field of shape analysis, and probably this 2004 SIGGRAPH course presentation can give a feel for what people do to compute shape descriptors.
This is how I would do it:
Position the sets at the center of mass
Compute the inertia tensor. This gives you three coordinate axes. Rotate to them. [*]
Write down the list of points in a given order (for example, top to bottom, left to right) with your required precision.
Apply any algorithm you'd like for a resulting array.
To compare two sets, unless you need to store the hash results in advance, just apply your favorite comparison algorithm to the sets of points of step 3. This could be, for example, computing a distance between two sets.
I'm not sure I can recommend an algorithm for step 4, since it appears that your requirements are contradictory: anything called hashing usually has the property that a small change in input results in very different output. Anyway, now I've reduced the problem to an array of numbers, so you should be able to figure things out.
[*] If two or three of your axes coincide, select coordinates by some other means, e.g. the longest distance. But this is extremely rare for random points.
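A rough sketch of steps 1-4 (using the covariance matrix, whose principal axes coincide with those of the inertia tensor for unit masses; note that each axis's sign remains ambiguous, so a robust version would also canonicalise the signs, e.g. by the sign of each axis's third moment):

    import numpy as np

    def canonical_form(points, decimals=3):
        """Centre at the mean, rotate into the principal axes, round and sort,
        giving an array that is (mostly) invariant to translation and rotation."""
        p = points - points.mean(0)
        _, vecs = np.linalg.eigh(np.cov(p.T))   # principal axes
        q = np.round(p @ vecs, decimals)
        return q[np.lexsort(q.T[::-1])]         # deterministic row order

    def same_shape(a, b):
        return a.shape == b.shape and np.array_equal(canonical_form(a), canonical_form(b))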
Maybe you should also read up on the RANSAC algorithm. It's commonly used for stitching together panorama images, which seems to be a bit similar to your problem, only in 2 dimensions. Just google for RANSAC, panorama and/or stitching to get a starting point.