K-Means for diagonal clusters - algorithm

I currently have 2 clusters which essentially lie along 2 lines on a 3D surface.
I've tried a simple k-means algorithm, which yielded the above separation (the big red dots are the means).
I have tried clustering with soft k-means with different variances for each mean along the 3 dimensions. However, it also failed to model this, presumably as it cannot fit the shape of a diagonal Gaussian.
Is there another algorithm that can take into account that the data is diagonal? Alternatively, is there a way to essentially "rotate" the data such that soft k-means could be made to work?

K-means does not cope well with correlations (elongated, diagonal clusters).
But these clusters look reasonably Gaussian to me, so you should try Gaussian Mixture Modeling instead.
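For instance, a minimal sketch with scikit-learn, using synthetic stand-in data and illustrative parameters (two components, full covariances):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Two synthetic elongated ("diagonal") clusters in 3D as a stand-in for the real data.
t = rng.uniform(0, 10, size=200)
cluster1 = np.column_stack([t, t, 0.2 * t]) + rng.normal(scale=0.3, size=(200, 3))
cluster2 = np.column_stack([t, -t, 0.1 * t]) + rng.normal(scale=0.3, size=(200, 3)) + np.array([0.0, 5.0, 2.0])
X = np.vstack([cluster1, cluster2])

# covariance_type="full" lets each component stretch along an arbitrary direction,
# which is exactly what k-means (implicitly spherical clusters) cannot do.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)

print(gmm.means_)        # component centres
print(gmm.covariances_)  # full 3x3 covariances capturing the diagonal elongation
```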


Algorithm to cluster 2D points into bounding-box decomposition

I'm looking for an algorithm that takes the unstructured 2D point set as illustrated above and gives me a decomposition into bounding boxes as shown below. The bounding boxes may overlap, but the algorithm should nevertheless try to find a tight fit (it does not need to be the best one possible, but a good one).
I already tried working with K-Means, but that doesn't give me useful results, as I would already need to know how many clusters I need.
There are several approaches to achieve this. The one I'd use is to apply the RANSAC algorithm in order to sequentially fit Oriented Bounding Boxes (OBB) to the data. By adopting this approach you can even improve the sample selection by computing a constrained Delaunay triangulation on your data, thus discarding bigger edges when fitting the OBB.
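A full RANSAC pipeline is beyond a short snippet, but the core building block it repeatedly needs, fitting an oriented bounding box to a sampled subset of 2D points via PCA, can be sketched roughly like this (all names and the test data are illustrative):

```python
import numpy as np

def fit_obb(points):
    """Fit an oriented bounding box to an (n, 2) array of points via PCA.

    Returns (center, axes, extents): axes are the two principal directions
    (as rows), extents are the half-widths of the box along those directions.
    """
    center = points.mean(axis=0)
    centered = points - center
    # Principal axes = right singular vectors of the centered point matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axes = vt
    projected = centered @ axes.T          # coordinates in the box frame
    mins, maxs = projected.min(axis=0), projected.max(axis=0)
    extents = (maxs - mins) / 2
    center = center + ((maxs + mins) / 2) @ axes  # recenter the box
    return center, axes, extents

# Example: an elongated, rotated point cloud.
rng = np.random.default_rng(1)
pts = rng.normal(size=(300, 2)) * [4.0, 0.5]
angle = np.deg2rad(30)
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle), np.cos(angle)]])
print(fit_obb(pts @ rot.T))
```

In a RANSAC loop you would repeatedly fit such a box to a random sample, count how many points it covers tightly, keep the best box, remove its inliers and repeat.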
Alternatively, you can apply the Alpha-shape algorithm on the entire set of points and start decomposing in smaller shapes, but this requires the definition of an optimal alpha value which is not trivial.

What is the best clustering for 3D points?

There are about 10 points in 3D space. The goal is to find the number of clusters and their positions. What is the best clustering method for this?
I have seen complete-linkage and DBSCAN clustering. Which one is more efficient?
You can try a space-filling curve: for example, translate the coordinates to binary and interleave the bits, treat the result as a base-4 number, and then sort the numbers.
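A rough sketch of that bit-interleaving (Morton / Z-order) idea; note that with three coordinates each interleaved group of bits is effectively a base-8 digit, and the scaling of the coordinates to integers here is an arbitrary assumption:

```python
def morton3d(x, y, z, bits=10):
    """Interleave the bits of three non-negative integer coordinates
    into a single Morton (Z-order) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

# Quantise the points to integers, encode, then sort by the code:
points = [(1.2, 3.4, 0.5), (9.0, 0.1, 7.7), (1.3, 3.3, 0.6)]
scaled = [tuple(int(c * 100) for c in p) for p in points]
order = sorted(range(len(points)), key=lambda i: morton3d(*scaled[i]))
print(order)  # nearby points tend to end up adjacent in this ordering
```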
With 10 points, efficiency is your least concern.
DBSCAN is meant for larger data. Hierarchical clustering is the way to go with such tiny data.
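For such a tiny data set, a minimal sketch with SciPy's hierarchical clustering (the synthetic points and the distance threshold are just illustrations):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Roughly 10 points in 3D, as in the question.
points = np.vstack([rng.normal(0.0, 0.2, (5, 3)), rng.normal(3.0, 0.2, (5, 3))])

# Complete-linkage hierarchical clustering; cut the dendrogram at a distance
# threshold instead of fixing the number of clusters up front.
Z = linkage(points, method="complete")
labels = fcluster(Z, t=1.0, criterion="distance")
print(labels)
```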

Can k-means clustering do classification?

I want to know whether the k-means clustering algorithm can do classification.
Suppose I have done a simple k-means clustering.
Assume I have a lot of data; I run k-means clustering and get 2 clusters, A and B, with the centroids computed using Euclidean distance.
Cluster A is on the left side.
Cluster B is on the right side.
So if I get a new data point, what should I do?
1. Run the k-means clustering algorithm again to find out which cluster the new data point belongs to?
2. Keep the final centroids and use the Euclidean distance to them to decide which cluster the new data point belongs to?
3. Some other method?
The simplest method of course is 2., assign each object to the closest centroid (technically, use sum-of-squares, not Euclidean distance; this is more correct for k-means, and saves you a sqrt computation).
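A minimal sketch of option 2 (the centroids and the new point are placeholders):

```python
import numpy as np

def assign_to_centroid(x, centroids):
    """Assign a new point to the nearest k-means centroid.

    Uses squared Euclidean distance (sum of squares), which gives the same
    argmin as the Euclidean distance but skips the square root.
    """
    d2 = ((centroids - x) ** 2).sum(axis=1)
    return int(np.argmin(d2))

centroids = np.array([[-2.0, 0.0], [3.0, 1.0]])  # e.g. centres of clusters A and B
new_point = np.array([2.5, 0.8])
print(assign_to_centroid(new_point, centroids))  # -> 1, i.e. cluster B
```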
Method 1. is fragile, as k-means may give you a completely different solution; in particular if it didn't fit your data well in the first place (e.g. too high dimensional, clusters of too different size, too many clusters, ...)
However, the following method may be even more reasonable:
3. Train an actual classifier.
Yes, you can use k-means to produce an initial partitioning, then assume that the k-means partitions could be reasonable classes (you really should validate this at some point though), and then continue as you would if the data had been user-labeled.
I.e. run k-means, train an SVM on the resulting clusters, then use the SVM for classification.
k-NN classification, or even assigning each object to the nearest cluster center (option 1) can be seen as very simple classifiers. The latter is a 1NN classifier, "trained" on the cluster centroids only.
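A hedged sketch of option 3 with scikit-learn (the synthetic data and parameters are placeholders, and as noted above you should validate that the k-means partitions make sense as classes):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (100, 2)), rng.normal(2.0, 0.5, (100, 2))])

# Step 1: use k-means to produce an initial partitioning.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Step 2: treat the cluster assignments as class labels and train a classifier.
svm = SVC(kernel="rbf").fit(X, kmeans.labels_)

# Step 3: classify new points with the SVM, not by re-running k-means.
new_points = np.array([[-1.8, 0.1], [2.2, -0.3]])
print(svm.predict(new_points))
```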
Yes, we can do classification.
I wouldn't say the algorithm itself (like #1) is particularly well-suited to classifying points, as incorporating data to be classified into your training data tends to be frowned upon (unless you have a real-time system, but I think elaborating on this would get a bit far from the point).
To classify a new point, simply calculate the Euclidean distance to each cluster centroid to determine the closest one, then classify it under that cluster.
There are data structures that allow you to determine the closest centroid more efficiently (like a kd-tree), but the above is the basic idea.
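For example, with SciPy's kd-tree (only worth the overhead when there are many centroids; the values are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

centroids = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0], [0.0, 5.0, 0.0]])
tree = cKDTree(centroids)

new_points = np.array([[4.8, 5.1, 4.9], [0.2, 0.1, -0.1]])
dist, idx = tree.query(new_points)   # index of the nearest centroid per point
print(idx)  # -> [1 0]
```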
If you've already done k-means clustering on your data to get two clusters, then you could use k Nearest Neighbors on the new data point to find out which class it belongs to.
Here is another method:
I saw it in "The Elements of Statistical Learning". I'll change the notation a little bit. Let C be the number of classes and K the number of clusters. Now, follow these steps:
Apply K-means clustering to the training data in each class separately, using K clusters per class.
Assign a class label to each of the C*K clusters.
Classify observation x to the class of the closest cluster.
It seems like a nice approach for classification that reduces data observations by using clusters.
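A minimal sketch of that scheme, assuming a labeled training set (all names, the per-class K and the toy data are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_prototypes(X, y, k_per_class=2, seed=0):
    """Run k-means separately within each class; return the C*K prototypes
    together with the class label attached to each prototype."""
    prototypes, proto_labels = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=k_per_class, n_init=10, random_state=seed).fit(X[y == c])
        prototypes.append(km.cluster_centers_)
        proto_labels.extend([c] * k_per_class)
    return np.vstack(prototypes), np.array(proto_labels)

def classify(x, prototypes, proto_labels):
    """Classify x to the class of the closest prototype."""
    d2 = ((prototypes - x) ** 2).sum(axis=1)
    return proto_labels[np.argmin(d2)]

# Toy example with two classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(4.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
protos, plabels = fit_prototypes(X, y)
print(classify(np.array([3.5, 4.2]), protos, plabels))  # -> 1
```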
If you are doing real-time analysis where you want to recognize new conditions during use (or adapt to a changing system), then you can choose some radius around the centroids to decide whether a new point starts a new cluster or should be included in an existing one. (That's a common need in monitoring of plant data, for instance, where it may take years after installation before some operating conditions occur.) If real-time monitoring is your case, check RTEFC or RTMAC, which are efficient, simple real-time variants of K-means; RTEFC in particular is non-iterative. See http://gregstanleyandassociates.com/whitepapers/BDAC/Clustering/clustering.htm
Yes, you can use that for classification. If you've decided you have collected enough data for all possible cases, you can stop updating the clusters and just classify new points based on the nearest centroid. As in any real-time method, there will be sensitivity to outliers, e.g. ones caused by sensor error or failure when using sensor data. If you create new clusters, outliers could be considered legitimate if one purpose of the clustering is to identify faults in the sensors, although that is most useful when you can do some labeling of clusters.
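A rough sketch of the radius idea described above (the threshold, the exponential update and the data are illustrative assumptions, not the actual RTEFC/RTMAC formulas):

```python
import numpy as np

def update_clusters(x, centroids, radius=2.0, alpha=0.1):
    """If x is within `radius` of the nearest centroid, assign it there and
    nudge that centroid toward x; otherwise start a new cluster at x."""
    if centroids:
        d = np.linalg.norm(np.asarray(centroids) - x, axis=1)
        i = int(np.argmin(d))
        if d[i] <= radius:
            centroids[i] = (1 - alpha) * centroids[i] + alpha * x
            return i
    centroids.append(np.asarray(x, dtype=float).copy())
    return len(centroids) - 1

centroids = []
for point in [[0.0, 0.0], [0.5, 0.2], [10.0, 9.5], [0.1, -0.3]]:
    label = update_clusters(np.array(point), centroids)
    print(label, [c.round(2).tolist() for c in centroids])
```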
You are confusing the concepts of clustering and classification. When you have labeled data, you already know how the data is grouped according to the labels, and there is no point in clustering the data unless you want to find out how well your features can discriminate the classes.
If you run the k-means algorithm to find the centroid of each class and then use the distances to the centroids to classify a new data point, you are in fact implementing a form of the linear discriminant analysis algorithm that assumes the same multiple-of-identity covariance matrix for all classes.
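For reference, writing out the standard LDA discriminant makes this reduction explicit. With a shared covariance $\Sigma$ and prior $\pi_k$ for class $k$,
$$\delta_k(x) = x^\top \Sigma^{-1}\mu_k - \tfrac{1}{2}\mu_k^\top \Sigma^{-1}\mu_k + \log \pi_k ,$$
and with $\Sigma = \sigma^2 I$ and equal priors, maximizing $\delta_k(x)$ over $k$ is equivalent to minimizing the squared distance $\lVert x - \mu_k \rVert^2$, i.e. assigning $x$ to the nearest centroid.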
After the k-means clustering algorithm converges, it can be used for classification, with only a few labeled exemplars/training data.
It is a very common approach when the number of labeled training instances is very limited due to the high cost of labeling.

Demons algorithm for image registration (for dummies)

I was trying to make an application that compares the difference between 2 images in Java with OpenCV. After trying various approaches I came across an algorithm called the demons algorithm.
To me it seems to give the difference between the images via some transformation at each location, but I couldn't understand it since the references I found were too complex for me.
Even if the demons algorithm does not do what I need, I'm interested in learning it.
Can anyone explain simply what happens in the demons algorithm, and how to write simple code to apply it to 2 images?
I can give you an overview of general algorithms for deformable image registration; demons is one of them.
There are 3 components to such an algorithm: a similarity metric, a transformation model and an optimization algorithm.
A similarity metric is used to compute pixel-based or patch-based similarity between the images. Common similarity measures are SSD and normalized cross-correlation for mono-modal images, while information-theoretic measures like mutual information are used in the case of multi-modal image registration.
In the case of deformable registration, a regular grid is generally superimposed over the image, and the grid is deformed by solving an optimization problem formulated so that the similarity metric and a smoothness penalty imposed on the transformation are minimized. Once the deformations over the grid are found, the final transformation at the pixel level is computed using a B-Spline interpolation of the grid, so that the transformation is smooth and continuous.
There are 2 general approaches to solving the optimization problem: some people use discrete optimization and solve it as an MRF optimization problem, while others use gradient descent; I think demons uses gradient descent.
In the case of MRF-based approaches, the unary cost is the cost of deforming each node in the grid, given by the similarity computed between patches; the pairwise cost, which imposes the smoothness of the grid, is generally a Potts or truncated quadratic potential ensuring that neighboring nodes in the grid have almost the same displacement. Once you have the unary and pairwise costs, you feed them to an MRF optimization algorithm and get the displacements at the grid level, then you use a B-Spline interpolation to compute the pixel-level displacement. This process is repeated in a coarse-to-fine fashion over several scales, and the algorithm is also run many times at each scale (reducing the displacement at each node every time).
In the case of gradient-descent-based methods, the problem is formulated as an energy function combining the similarity metric with the grid transformation computed over the image, and the gradient of that energy is computed. The energy function is minimized using iterative gradient descent; however, these approaches can get stuck in local minima and are quite slow.
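A very rough, single-scale sketch of one classical (Thirion-style) demons iteration for 2D images, just to show the shape of the computation; the iteration count, smoothing width and warping details are simplified assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_register(fixed, moving, iterations=50, sigma=2.0):
    """Very simplified demons: estimate a displacement field (uy, ux)
    that warps `moving` toward `fixed`."""
    fixed = np.asarray(fixed, dtype=float)
    moving = np.asarray(moving, dtype=float)
    uy = np.zeros_like(fixed)
    ux = np.zeros_like(fixed)
    gy, gx = np.gradient(fixed)                      # fixed-image gradient
    yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)

    for _ in range(iterations):
        # Warp the moving image with the current displacement field.
        warped = map_coordinates(moving, [yy + uy, xx + ux], order=1, mode="nearest")
        diff = warped - fixed
        denom = gx**2 + gy**2 + diff**2
        denom[denom == 0] = 1e-9
        # Demons force: push each pixel along the fixed-image gradient.
        uy -= diff * gy / denom
        ux -= diff * gx / denom
        # Regularize: smooth the displacement field with a Gaussian.
        uy = gaussian_filter(uy, sigma)
        ux = gaussian_filter(ux, sigma)
    return uy, ux

# Toy usage: register a slightly shifted copy of a smooth random image.
fixed = gaussian_filter(np.random.default_rng(0).random((64, 64)), 3)
moving = np.roll(fixed, 2, axis=1)
uy, ux = demons_register(fixed, moving)
```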
Some popular tools are DROP and Elastix; ITK also provides some tools.
If you want to know more about algorithms related to deformable image registration, I recommend taking a look at FAIR (the guide book). FAIR is a toolbox for Matlab, so you will have examples to help you understand the theory.
http://www.cas.mcmaster.ca/~modersit/FAIR/
Then, if you specifically want to see a demons example, there is this other toolbox:
http://www.mathworks.es/matlabcentral/fileexchange/21451-multimodality-non-rigid-demon-algorithm-image-registration

A density-based clustering library that takes a distance matrix as input

I need help finding an open/free density-based clustering library that takes a distance matrix as input and returns clusters in which each element is at most a maximum distance "x" away from each of the other elements in the cluster (basically returning clusters with a specified density).
I checked out the DBSCAN algorithm and it seems to suit my needs. Are there any clean implementations of DBSCAN that you might know of, which can work with a pre-computed distance matrix and output clusters with the desired density?
Your inputs will be really useful.
ELKI (at http://elki.dbs.ifi.lmu.de/ ) can load external distance matrices, either in a binary or an ASCII format, and then run distance-based clustering algorithms on them.
Certain algorithms such as k-means cannot work, however, as they rely on the distance to the mean, which is obviously not precomputed. But DBSCAN and OPTICS, for example, work fine with precomputed distances.
I haven't tried it out yet, but I'm looking for something similar and came across this python implementation of DBSCAN:
http://scikit-learn.org/dev/auto_examples/cluster/plot_dbscan.html#example-cluster-plot-dbscan-py
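For reference, scikit-learn's DBSCAN can also consume a precomputed distance matrix directly; a minimal sketch (the eps/min_samples values and the toy data are placeholders):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(5.0, 0.3, (20, 2))])

# Any precomputed distance matrix works; here it is just pairwise Euclidean.
D = squareform(pdist(points))

labels = DBSCAN(eps=1.0, min_samples=3, metric="precomputed").fit_predict(D)
print(labels)  # -1 marks noise points
```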
Matlab file exchange has an implementation which is straightforward to adapt to precomputed matrices. Just remove the call to pdist1 outside the function in your code.
