Writing a similarity function for images for clustering data

I know how to write a similarity function for data points in Euclidean space (by taking the negative squared error). Now, if I want to test my clustering algorithms on images, how can I write a similarity function for data points in images? Do I base it on their RGB values? And if so, how?

I think we need to clarify a few points first:
Are you clustering only on color? If so, take the RGB values of the pixels and apply your metric function (minimize the sum of squared errors, or simply compute the SAD, the Sum of Absolute Differences).
Are you clustering on a spatial basis (within an image)? In that case you should take position into account, as you did for Euclidean space, treating the image as your samples' domain. It's a 2D space anyway... 3D if you also consider color information (see next).
Are you looking for 3D information from the image (2D position + 1D color)? This is the most likely case. As a first approach, consider segmentation techniques if your image shows regular or well-defined shapes. If that fails, or if you want a less hand-tuned algorithm, consider reducing the 3D information space to 2D or even 1D by running PCA on the data. By analyzing the principal components you can drop uninformative dimensions and/or exploit the intrinsic structure of the data.
The topic needs much more than a single post to be covered properly, but I hope this helps a bit.
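For the first case (clustering only on color), a minimal sketch of such a per-pixel metric in MATLAB might look like this; the function name and the assumption that both images have the same size are mine:
function s = rgb_similarity(A, B)
    % Similarity of two same-sized RGB images based only on their pixel values;
    % higher values mean "more similar".
    dA = double(A(:));                 % flatten all RGB values
    dB = double(B(:));
    s = -sum((dA - dB).^2);            % negative sum of squared errors
    % Alternative: SAD (Sum of Absolute Differences)
    % s = -sum(abs(dA - dB));
end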

Related

Dividing the plane into regions of equal mass based on a density function

Given a "density" scalar field in the plane, how can I divide the plane into nice (low moment of inertia) regions so that each region contains a similar amount of "mass"?
That's not the best description of what my actual problem is, but it's the most concise phrasing I could think of.
I have a large map of a fictional world for use in a game. I have a pretty good idea of approximately how far one could walk in a day from any given point on this map, and this varies greatly based on the terrain etc. I would like to represent this information by dividing the map into regions, so that one day of walking could take you from any region to any of its neighboring regions. It doesn't have to be perfect, but it should be significantly better than simply dividing the map into a hexagonal grid (which is what many games do).
I had the idea that I could create a gray-scale image with the same dimensions as the map, where each pixel's value represents how quickly one can travel through the corresponding location on the map. Well-maintained roads would be encoded as white pixels, and insurmountable cliffs would be encoded as black, or something like that.
My question is this: does anyone have an idea of how to use such a gray-scale image (the "density" scalar field) to generate my "grid" from the previous paragraph (regions of similar "mass")?
I've thought about using the gray-scale image as a discrete probability distribution, from which I can generate a bunch of coordinates, and then use some sort of clustering algorithm to create the regions, but a) the clustering algorithms would have to create clusters of a similar size, I think, for that idea to work, which I don't think they usually do, and b) I barely have any idea if any of that even makes sense, as I'm way out of my comfort zone here.
Sorry if this doesn't belong here; my idea has always been to solve it programmatically somehow, so this seemed the most sensible place to ask.
UPDATE: Just thought I'd share the results I've gotten so far, trying out the second approach suggested by @samgak: recursively subdividing regions into boxes of similar mass, finding the center of mass of each region, and creating a Voronoi diagram from those centers.
I'll keep tweaking, and maybe try to find a way to make it less grid-like (like in the upper right corner), but this worked way better than I expected!
Building upon @samgak's solution, if you don't want the grid-like structure, you can simply add a small random perturbation to your centers. The difference I obtain is shown below:
[Image: regions without perturbation]
[Image: regions with a small random perturbation]
A couple of rough ideas:
You might be able to repurpose a color-quantization algorithm, which partitions color space into regions containing roughly the same number of pixels. You would have to do some kind of funny mapping where the darker the pixel in your map, the greater the number of pixels you create (in a temporary image) of a color corresponding to that pixel's location. Then you quantize that image into x colors and use their color values as coordinates for the centers of the regions in your map; you can then create a Voronoi diagram from these points to define your region boundaries.
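As a very rough sketch of that idea, one could skip the temporary image and instead sample coordinates with probability proportional to darkness, letting kmeans stand in for the quantization step (kmeans and randsample are from the Statistics and Machine Learning Toolbox; the file name, sample count and region count below are purely illustrative):
W = im2double(imread('terrain_speed.png'));        % hypothetical grayscale map: white = fast, black = impassable
k = 64;                                            % desired number of regions
[rows, cols] = size(W);
[X, Y] = meshgrid(1:cols, 1:rows);
mass = 1 - W(:) + eps;                             % darker (slower) pixels carry more mass, so their regions end up smaller
idx = randsample(numel(mass), 20000, true, mass);  % sample pixel indices proportional to mass
pts = [X(idx), Y(idx)];
[~, centers] = kmeans(pts, k);                     % k-means stands in for the color-quantization step
[vx, vy] = voronoi(centers(:,1), centers(:,2));    % Voronoi edges give the region boundaries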
Another approach (which is similar to how some color-quantization algorithms work under the hood anyway) is to recursively subdivide regions of your map into axis-aligned boxes, taking each rectangular region and choosing the optimal splitting line (x or y) and position so as to create two smaller rectangles of similar "mass". You end up with a power-of-two count of rectangular regions, and you can get rid of the blockiness by taking the center of mass of each rectangle (not simply the center of its bounding box) and creating a Voronoi diagram from all the center points. This isn't guaranteed to create regions of exactly equal mass, but they should be roughly equal. The algorithm could be improved by allowing recursive splitting along lines of arbitrary orientation (or a finite number of orientations, say 8, 16, or 32), but of course that makes it more complicated.
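A rough sketch of this recursive subdivision, assuming the map is a grayscale matrix W in which larger values mean more "mass" and that 2^depth is small compared to the image size (function and variable names are mine):
function centers = equal_mass_centers(W, depth)
    % Recursively split W into 2^depth axis-aligned boxes of roughly equal mass,
    % then return the center of mass (row, column) of each box.
    boxes = {[1, size(W,1), 1, size(W,2)]};        % each box is [rowMin rowMax colMin colMax]
    for d = 1:depth
        newBoxes = cell(1, 2*numel(boxes));
        for k = 1:numel(boxes)
            b = boxes{k};
            sub = W(b(1):b(2), b(3):b(4));
            if b(2)-b(1) >= b(4)-b(3)              % split along the longer side
                m = cumsum(sum(sub, 2));           % cumulative mass, row by row
                cut = find(m >= m(end)/2, 1);
                cut = min(max(cut, 1), b(2)-b(1)); % keep both halves non-empty
                newBoxes{2*k-1} = [b(1), b(1)+cut-1, b(3), b(4)];
                newBoxes{2*k}   = [b(1)+cut, b(2), b(3), b(4)];
            else
                m = cumsum(sum(sub, 1));           % cumulative mass, column by column
                cut = find(m >= m(end)/2, 1);
                cut = min(max(cut, 1), b(4)-b(3));
                newBoxes{2*k-1} = [b(1), b(2), b(3), b(3)+cut-1];
                newBoxes{2*k}   = [b(1), b(2), b(3)+cut, b(4)];
            end
        end
        boxes = newBoxes;
    end
    centers = zeros(numel(boxes), 2);
    for k = 1:numel(boxes)
        b = boxes{k};
        sub = W(b(1):b(2), b(3):b(4));
        [C, R] = meshgrid(b(3):b(4), b(1):b(2));   % column and row coordinates inside the box
        centers(k,:) = [sum(R(:).*sub(:)), sum(C(:).*sub(:))] / max(sum(sub(:)), eps);
    end
end
Feeding the centers to voronoi(centers(:,2), centers(:,1)) then gives the region boundaries, as in the update above.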

Finding correspondence of edges for image matching

I have a challenging problem to solve. The figure shows green lines that are edges derived from one image, and red lines that are edges derived from another image. Both images were taken with the same camera, so the intrinsic parameters are the same; only the extrinsic parameters differ, i.e. there is a slight rotation and translation between the two shots. As can be seen in the figure, the two sets of lines are pretty close. My task is to find correspondences between the edges derived from the first image and those derived from the second.
I have gone through a few sources that suggest matching each edge to its nearest line segment by calculating the Euclidean distances between the endpoints of an edge in image 1 and the edges in image 2. However, this approach is not acceptable in my case, because some edges in image 1 lie close to non-corresponding edges in image 2, which would lead to a huge number of mismatches.
After a bit more research, a few more sources referred to the Hausdorff distance. I believe this could really be a solution to my problem, and the paper
Rucklidge, William J. "Efficiently locating objects using the Hausdorff distance." International Journal of Computer Vision 24.3 (1997): 251-270.
seemed really interesting.
If I understood it correctly, the paper formulates a function for computing the translation that maps model edges onto image edges. However, when it comes to implementing this in MATLAB, I'm completely lost as to where to begin. I would be much obliged if someone could point me to pseudocode for the algorithm, or to a MATLAB implementation of it.
Additionally, I am aware of "Apply Hausdorff distance to tile image classification" and "Hausdorff regression". However, I'm still unsure how to minimise the Hausdorff distance.
Note 1: Computational cost is not a concern right now, but a faster algorithm is preferred.
Note 2: I am open to other algorithms and methods, as long as pseudocode or an open implementation is available.
Have you considered MATLAB's image registration tools?
With imregister (https://www.mathworks.com/help/images/ref/imregister.html), you can simply pass in both images, one as the fixed reference and one as the "moving" image, and it will register them using an affine transform. The function call is just:
[optimizer, metric] = imregconfig('monomodal');
output_registered = imregister(moving,fixed,'affine',optimizer,metric);
For better visualization, use the RegistrationEstimator command to open a GUI in which you can import the two images and experiment with the registration. From there you can export code for future images.
Furthermore, if you need to account for non-rigid transforms, there is imregdemons (https://www.mathworks.com/help/images/ref/imregdemons.html), which works in much the same way.
You can compute the Hausdorff distance using MATLAB's bwdist function. You compute the distance transform of one edge image, evaluate it at the edge points of the other, and take the maximum value. (You can also take the sum instead, in which case it is called the chamfer distance.) For this problem you will probably want the symmetric Hausdorff distance, so you would do the computation in both directions.
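A minimal sketch of that computation, assuming E1 and E2 are same-sized logical edge maps:
D1 = bwdist(E1);                               % distance from every pixel to the nearest edge pixel of E1
D2 = bwdist(E2);
hausdorff = max(max(D1(E2)), max(D2(E1)));     % symmetric Hausdorff distance
chamfer   = sum(D1(E2)) + sum(D2(E1));         % symmetric chamfer (sum) variant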
Both the Hausdorff and chamfer distances measure the match quality of a particular alignment. To find the best registration you need to try multiple alignment transformations and evaluate them all, looking for the best one. As suggested in another answer, you may find it easier to use existing registration tools than to write your own.
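For instance, a naive brute-force search over small translations could score each candidate alignment with the chamfer distance from the snippet above; the search range and the use of circshift (which wraps around at the image borders) are simplifications on my part:
D1 = bwdist(E1);
best = inf;
bestShift = [0 0];
for dy = -20:20
    for dx = -20:20
        E2s = circshift(E2, [dy dx]);          % translated copy of the second edge map
        score = sum(D1(E2s));                  % chamfer score of this alignment
        if score < best
            best = score;
            bestShift = [dy dx];
        end
    end
end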

Algorithm for comparing pictures/shapes for uniqueness

Say that you have a grid where users draw pictures/shapes by clicking and coloring the boxes. Can you suggest any algorithm to compare these drawings according to originality? I was thinking about comparing them according to the boxes they occupy, but I am not sure if that is the best way. I hope I was clear. Thanks.
IMHO, the best choice would be to use mutual information as a metric. Since this is still a very abstract problem, I am not sure about the details of calculating it.
Let me elaborate on why mutual information is a good measure. Let us assume an image is made up of colors a, b, c and d (exactly four colors), and another image is exactly the same except that a is replaced with e, b with f, c with g, and d with h. Under most other metrics (correlation, for example) these two images look dissimilar, but mutual information shows that they share exactly the same information, only coded differently.
How to calculate mutual information: first, you need to align the images (a tough problem in itself; you can get a reasonable solution by searching over offsets, scaling and rotation). Once the images are aligned, you have a pixel-to-pixel correspondence. You can assume each pixel is independent and calculate I(X;Y), where X is a pixel from the first image and Y the corresponding pixel from the second. This is the simplest solution, but you can also assume more complicated relations, e.g. I(X1,...,Xk; Y1,...,Yk), where X1,...,Xk are adjacent pixels and the Yi are their counterparts.
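A minimal sketch of the pixel-independent case, assuming two aligned 8-bit grayscale images A and B; the bin count is illustrative:
nbins = 32;
a = double(A(:));
b = double(B(:));
ia = min(floor(a / 256 * nbins) + 1, nbins);               % quantize intensities into nbins bins
ib = min(floor(b / 256 * nbins) + 1, nbins);
joint = accumarray([ia ib], 1, [nbins nbins]) / numel(a);  % joint histogram, P(X,Y)
px = sum(joint, 2);                                        % marginal P(X)
py = sum(joint, 1);                                        % marginal P(Y)
pxpy = px * py;                                            % product of the marginals
nz = joint > 0;
MI = sum(joint(nz) .* log2(joint(nz) ./ pxpy(nz)));        % I(X;Y) in bits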
You can use a space-filling curve (the Hilbert curve, for example), which fills the plane and traverses each grid point exactly once. This lets you reduce the 2D problem to a 1D one: once the points are ordered along the curve you can view the image as a one-dimensional signal, which makes it easier to apply statistical algorithms to look for similarities. You can apply this to each color channel of the image.

Algorithms to normalize finger touch data (reduce the number of points)

I'm working on an app that lets users select regions by finger painting on top of a map. The points then get converted to a latitude/longitude and get uploaded to a server.
The touch screen is delivering way too many points to be uploaded over 3G. Even small regions can accumulate up to ~500 points.
I would like to smooth this touch data (approximate it within some tolerance). The accuracy of drawing does not really matter much as long as the general area of the region is the same.
Are there any well-known algorithms to do this? Is this a job for a Kalman filter?
There is the Ramer–Douglas–Peucker algorithm (wikipedia).
The purpose of the algorithm is, given a curve composed of line segments, to find a similar curve with fewer points. The algorithm defines 'dissimilar' based on the maximum distance between the original curve and the simplified curve. The simplified curve consists of a subset of the points that defined the original curve.
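A minimal recursive sketch of the algorithm, for a polyline given as an N-by-2 matrix of points and a distance tolerance epsilon (the function name is mine):
function out = rdp(points, epsilon)
    % Ramer-Douglas-Peucker simplification of a 2D polyline.
    if size(points, 1) < 3
        out = points;
        return;
    end
    p1 = points(1, :);
    p2 = points(end, :);
    % perpendicular distance of every point to the line through p1 and p2
    d = abs((p2(1)-p1(1)) .* (p1(2)-points(:,2)) - (p1(1)-points(:,1)) .* (p2(2)-p1(2))) ...
        / max(norm(p2 - p1), eps);
    [dmax, idx] = max(d);
    if dmax > epsilon
        left  = rdp(points(1:idx, :), epsilon);    % keep the farthest point and recurse on both halves
        right = rdp(points(idx:end, :), epsilon);
        out = [left(1:end-1, :); right];           % drop the duplicated split point
    else
        out = [p1; p2];                            % all intermediate points are within tolerance
    end
end
Applied to the ~500 touch points from the question, a tolerance of a few pixels (or the equivalent in degrees of latitude/longitude) should reduce the count dramatically while keeping the overall outline.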
You probably don't need anything too exotic to dramatically cut down your data.
Consider something as simple as this:
Construct some sort of error metric. An easy one would be a normalized sum of the distances from the omitted points to the line segment that approximates them. Decide what a tolerable error is under this metric.
Then, starting from the first point, construct the longest line segment that stays within the tolerable error. Repeat this process until you have converted the entire path into a polyline.
This will not give you the globally optimal approximation but it will probably be good enough.
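A rough sketch of this greedy scheme, using the mean point-to-chord distance as the error metric (both the metric and the names are illustrative):
function keep = greedy_simplify(points, tol)
    % Greedily extend each segment as far as possible while the error stays under tol.
    n = size(points, 1);
    keep = 1;                                      % indices of retained points
    i = 1;
    while i < n
        j = i + 1;
        while j < n && chord_error(points, i, j + 1) <= tol
            j = j + 1;                             % extend the segment while the error is tolerable
        end
        keep(end+1) = j;                           % append the next retained point
        i = j;
    end
end

function e = chord_error(points, i, j)
    % mean distance of the intermediate points to the chord from point i to point j
    p1 = points(i, :);
    p2 = points(j, :);
    mid = points(i+1:j-1, :);
    if isempty(mid)
        e = 0;
        return;
    end
    d = abs((p2(1)-p1(1)) .* (p1(2)-mid(:,2)) - (p1(1)-mid(:,1)) .* (p2(2)-p1(2))) ...
        / max(norm(p2 - p1), eps);
    e = mean(d);
end
The simplified path to upload would then be points(keep, :).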
If you want the approximation to be more "curvy" you might consider using splines or Bézier curves rather than straight line segments.
You could subdivide the surface into a grid with a quadtree or a space-filling curve. A space-filling curve reduces the 2D complexity to a 1D complexity. Have a look at Nick's "hilbert curve quadtree spatial index" blog.
I was going to do something like this in an app, but I intended to generate a path from the points on the fly, using a technique mentioned in this Point Sequence Interpolation thread.

Find a similarity of two vector shapes

Looking for any information/algorithms relating to comparing vector graphics. E.g. say there are two point collections or vector files with two almost identical figures. I want to determine that the first figure is about 90% similar to the second one.
A common way to test for similarity is with image moments. Central moments are intrinsically translation invariant, and if the objects you compare might be scaled or rotated you can use moments that are invariant to those transformations as well, such as the Hu moments.
Most of the programs I know of require rasterized versions of the vector objects, but the moments could also be calculated directly from the vector graphics using a Green's theorem approach. A more simplistic approach, which just identifies unique (unordered) vertex configurations, would be to convert the Hu moment integrals into sums over the vertices: in a physics analogy, replacing the continuous object with equal point masses at each vertex.
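For reference, a minimal sketch of the first two Hu invariants for a rasterized (binary) shape, computed from normalized central moments; no toolbox functions are assumed and the function name is mine:
function hu = hu_moments(BW)
    % First two Hu moment invariants of a binary shape.
    [r, c] = find(BW);                          % coordinates of the shape's pixels
    m00 = numel(r);
    xc = mean(c);
    yc = mean(r);
    eta = @(p, q) sum((c - xc).^p .* (r - yc).^q) / m00^(1 + (p + q)/2);   % normalized central moments
    n20 = eta(2, 0); n02 = eta(0, 2); n11 = eta(1, 1);
    hu = [n20 + n02, ...                        % first Hu invariant
          (n20 - n02)^2 + 4*n11^2];             % second Hu invariant
end
Comparing the resulting invariant vectors of two shapes, for example via a relative difference, then gives a translation-, scale- and rotation-tolerant similarity score.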
There is a paper on a tool called VISTO that sorts vector graphics images (using moments, I think), which should certainly be useful for more details.
You could search for fingerprint matching algorithms. Fingerprints are usually converted to a set of points with their relative location to each other, which makes it basically the same problem as yours.
You could transform it to a non-vector graphic and then apply standard image analysis techniques like SIFT points, etc.

Resources