Similarity between non-colored images

I want to find the percentage similarity between uncolored images. Specifically, I want to compare my own drawing with a reference image.
I don't have any knowledge of image processing. What algorithms can be used to achieve my goal? Any guidance would be appreciated.

If your images are both black and white, you could compute the Hausdorff distance. In simple terms, treat each black pixel as a point. For each point of image A, you compute the distance to the closest point of image B, giving you a list of distances. The directed Hausdorff distance is the greatest value in this list, and the full Hausdorff distance is the maximum of the two directed distances (A to B and B to A). The smaller it is, the more similar your images are.
You will have to compute this for several relative positions/angles/aspect ratios between your two images, in order to find the placement that matches best.
You can extend this method to any non-black-and-white image by computing the edges first.
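A minimal sketch of this point-set comparison, assuming SciPy is available; the two binary arrays are hypothetical stand-ins for your binarized drawing and reference image:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# hypothetical binary images: True where a pixel is black
a = np.zeros((64, 64), dtype=bool); a[20:40, 30] = True
b = np.zeros((64, 64), dtype=bool); b[22:42, 33] = True

pts_a = np.argwhere(a)  # (row, col) coordinates of the black pixels of A
pts_b = np.argwhere(b)

# full Hausdorff distance = max of the two directed distances
d = max(directed_hausdorff(pts_a, pts_b)[0],
        directed_hausdorff(pts_b, pts_a)[0])
print(d)  # smaller = more similar, at this particular relative position
```

You would wrap this in a loop over candidate translations/rotations/scalings of one image, keeping the smallest distance found.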


Applying Delaunay Triangulation on RGB channels instead of final image

First I applied Delaunay triangulation to an image with 3000 triangles and measured the similarity (SSIM) to the original image as 0.75 (higher values mean more similar).
Then I applied Delaunay triangulation to the image's RGB channels separately, with 1000 triangles each, combined the 3 channel images into a final image, and measured its similarity (SSIM) to the original as 0.65.
In both cases, points were chosen randomly, and the median value of the pixels inside each triangle was chosen as the triangle's color.
I ran many trials, but none of them showed better results.
Isn't this weird? I use 1000 random triangles on one layer, then 1000 more on a second layer, then 1000 more on a third. Since the triangles of different layers do not coincide, overlaying them should create more than 3000 unique polygons, compared to the 3000 triangles of the whole-image triangulation.
a) What can be the reason behind this?
b) What advantages can I obtain by applying Delaunay triangulation to the RGB channels separately instead of to the image itself? Apparently I cannot get better similarity, but could I do better storage-wise, or in other areas? What might they be?
When the triangles in each layer don't coincide, it creates a low-pass filtering effect in brightness, because the three triangles that contribute to a pixel's brightness are larger than the single triangle you get in the other case.
It's hard to suggest any 'advantages' to either approach, since we don't really know why you are doing this in the first place.
If you want better similarity, though, then you have to pick better points. I would suggest making the probability of selecting a point proportional to the magnitude of the gradient at that point.
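A hedged sketch of that sampling strategy, assuming OpenCV's Python bindings and a hypothetical input file name:

```python
import cv2
import numpy as np

# hypothetical grayscale input
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# gradient magnitude from Sobel derivatives
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
mag = np.sqrt(gx * gx + gy * gy)

# draw point indices with probability proportional to gradient magnitude
# (requires at least 3000 pixels with nonzero gradient)
p = (mag / mag.sum()).ravel()
idx = np.random.choice(mag.size, size=3000, replace=False, p=p)
ys, xs = np.unravel_index(idx, mag.shape)
points = np.column_stack([xs, ys])  # use these as the triangulation sites
```

The points then concentrate along edges, where the structural detail that SSIM rewards actually lives, instead of wasting triangles on flat regions.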

How do I create a distance image for a chamfer algorithm?

I think I know how the chamfer algorithm works:
Basically, you take an image where every pixel's greyscale value is its distance to the closest edge pixel.
Then you take a reference image (20px × 30px) and try every possible position on the distance image (100px × 100px, giving (100−20) × (100−30) = 5600 positions).
For each position, you save the sum of the distance-image pixels under the reference image's pixels, and then you take the best match.
The problem is I have no idea how to turn the edge image (feature image) into the distance image.
I know I could just write a giant loop, but that seems very inefficient. So what's an efficient way to turn the feature image into the distance image?
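The thread doesn't name a library, but as one hedged sketch: SciPy's distance_transform_edt computes exactly this distance image efficiently (classic chamfer implementations use a two-pass forward/backward sweep instead), and the matching step is then the position loop described above. The edge and template arrays here are hypothetical:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# hypothetical 100x100 feature image: True on edge pixels
edges = np.zeros((100, 100), dtype=bool)
edges[40, 20:80] = True

# distance image: each pixel's Euclidean distance to the nearest edge pixel.
# distance_transform_edt measures distance to the nearest zero, so invert.
dist = distance_transform_edt(~edges)

# chamfer matching: slide the template's edge mask over the distance image
# and sum the distances under its edge pixels at every position
template = np.zeros((30, 20), dtype=bool)  # hypothetical 20x30 reference
template[15, 2:18] = True
th, tw = template.shape
scores = np.empty((dist.shape[0] - th + 1, dist.shape[1] - tw + 1))
for y in range(scores.shape[0]):
    for x in range(scores.shape[1]):
        scores[y, x] = dist[y:y + th, x:x + tw][template].sum()
best = np.unravel_index(scores.argmin(), scores.shape)  # lowest sum = best match
```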

How do I find average-distance transform of a binary image?

The distance transform provides the distance of each pixel from the nearest boundary/contour/background pixel. I don't want the closest distance; instead, I want some sort of average measure of the pixel's distance from the boundary/contour in all directions. Any suggestions for computing this distance transform would be appreciated. If there are any existing algorithms and/or efficient C++ code available to compute such a distance transform, that would be wonderful too.
If you have a binary image of the contours, then you can calculate the number of boundary pixels around each pixel within some window (using e.g. the integral image, or cv::blur). This would give you something like what you want.
You might be able to combine that with normalizing the distance transform for average distances.
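A minimal sketch of that windowed count, assuming OpenCV's Python bindings (cv2.blur is the same box filter as the cv::blur mentioned above); the contour image here is synthetic:

```python
import cv2
import numpy as np

# synthetic binary contour image: 1.0 on boundary pixels, 0 elsewhere
contour = np.zeros((100, 100), np.float32)
cv2.circle(contour, (50, 50), 30, 1.0, thickness=1)

# number of boundary pixels in a k x k window around each pixel:
# cv2.blur averages over the window, so multiply back by its area
k = 15
counts = cv2.blur(contour, (k, k)) * (k * k)
```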
If you want the "average measure of the pixel's distance from the boundary/contour in all directions", then I am afraid you have to extract the contour and, for each pixel inside the pattern, compute the average distance to the pixels belonging to the contour.
A heuristic for a "rough" approximation would be to compute several distance maps from source points (they could be the pattern extremities) and, for each pixel inside the pattern, sum the distances from all the distance maps. To get the exact measure you would have to compute as many distance maps as there are pixels on the contour, but if an approximation is okay, this speeds up the processing considerably.
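For the exact version, a brute-force sketch assuming SciPy; the pattern is a hypothetical filled square, and the pairwise-distance step is O(N·M), so it is only practical for smallish images:

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

# hypothetical binary pattern: True inside, False outside
img = np.zeros((64, 64), dtype=bool)
img[16:48, 16:48] = True

# contour = pattern pixels that disappear under one erosion
contour = img & ~binary_erosion(img)

inside = np.argwhere(img)        # (row, col) of every pattern pixel
boundary = np.argwhere(contour)  # (row, col) of every contour pixel

# "average-distance transform": for each inside pixel, the mean
# Euclidean distance to all contour pixels
avg = np.zeros(img.shape)
avg[tuple(inside.T)] = cdist(inside, boundary).mean(axis=1)
```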

Matlab - How to measure the dispersion of black in a binary image?

I am comparing RGB images of small colored granules spilled randomly on a white backdrop. My current method involves importing the image into Matlab, converting it to a binary image, setting a threshold, and forcing all pixels above it to white. Next, I calculate the percentage of pixels that are black. For comparing the images to one another, the % black pixels measurement is great; however, it does not take into account how well the granules are dispersed. Although the % black from two different images may be identical, the images may be far from alike. For example, assume I have two images to compare, both showing 15% black pixels. In one picture, the black pixels are randomly distributed throughout the image. In the other, a clump of black pixels sits in one corner and they are very sparse in the rest of the image.
What can I use in Matlab to numerically quantify how "spread out" the black pixels are for the purpose of comparing the two images?
I haven't been able to wrap my brain around this one yet, and need some help. Your thoughts/answers are most appreciated.
Found an answer to a very similar problem -> https://stats.stackexchange.com/a/13274
Basically, you would use the average distance from a central point to every black pixel as a measure of dispersion.
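The question asks for Matlab, but the measure itself is a few lines in any language; a sketch in Python/NumPy with a hypothetical random test image:

```python
import numpy as np

# hypothetical binary image: True where a pixel is black (granule)
bw = np.random.rand(200, 200) < 0.15

pts = np.argwhere(bw)              # (row, col) of every black pixel
centroid = pts.mean(axis=0)
# dispersion: mean distance of the black pixels from their centroid
dispersion = np.linalg.norm(pts - centroid, axis=1).mean()
```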
My idea is based upon the mean free path (used in ideal gas theory / thermodynamics).
First, you must separate your foreground objects, using something like bwconncomp.
The mean free path is calculated as the mean distance between the centers of your regions. So for n regions, you take all n(n−1)/2 pairs, calculate all the distances, and average them. If the mean distance is big, your particles are well spread; if it is small, your objects are close together.
You may want to multiply the resulting mean by n and divide it by the edge length to get a dimensionless number, independent of your image size and of the number of particles.
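A sketch of this mean-free-path measure, in Python/NumPy for consistency with the snippets above (scipy.ndimage.label plays the role of Matlab's bwconncomp); the test image is hypothetical:

```python
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import pdist

# hypothetical binary image: True where a pixel is black (granule)
bw = np.random.rand(200, 200) < 0.02

# label connected regions (counterpart of Matlab's bwconncomp)
labels, n = ndimage.label(bw)
centers = np.array(ndimage.center_of_mass(bw, labels, range(1, n + 1)))

# mean distance over all n(n-1)/2 pairs of region centers
mean_free_path = pdist(centers).mean()

# dimensionless variant: scale by n, divide by the image edge length
score = mean_free_path * n / bw.shape[0]
```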

Algorithm to Calculate Symmetry of Points

Given a set of 2D points, I want to calculate a measure of how horizontally symmetrical and vertically symmetrical those points are.
Alternatively, for each set of points I will also have a rasterised image of the lines between those points, so is there any way to calculate a measure of symmetry for images?
BTW, this is for use in a feature vector that will be presented to a neural network.
Clarification
The image on the left is 'horizontally' symmetrical. If we imagine a vertical line running down the middle of it, the left and right parts are symmetrical. Likewise, the image on the right is 'vertically' symmetrical, if you imagine a horizontal line running across its center.
What I want is a measure of just how horizontally symmetrical they are, and another of just how vertically symmetrical they are.
This is just a guideline / idea, you'll need to work out the details:
To detect symmetry with respect to horizontal reflection:
reflect the image horizontally
pad the original (unreflected) image horizontally on both sides
compute the correlation of the padded and the reflected images
The position of the maximum in the result of the correlation will give you the location of the axis of symmetry. The value of the maximum will give you a measure of the symmetry, provided you do a suitable normalization first.
This will only work if your images are "symmetric enough", and it works for images only, not sets of points. But you can create an image from a set of points too.
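A hedged sketch of those three steps, assuming OpenCV's Python bindings; cv2.matchTemplate with a normalized score does the pad-and-correlate work (its normalization is one concrete choice for the "suitable normalization" caveat above):

```python
import cv2
import numpy as np

def horizontal_symmetry(img):
    """Peak normalized correlation of img with its left-right mirror."""
    img = img.astype(np.float32)
    reflected = cv2.flip(img, 1)                   # reflect horizontally
    w = img.shape[1]
    padded = cv2.copyMakeBorder(img, 0, 0, w, w,   # pad on both sides
                                cv2.BORDER_CONSTANT, value=0)
    # normalized cross-correlation at every horizontal offset; the argmax
    # locates the symmetry axis, the max value is the symmetry measure
    scores = cv2.matchTemplate(padded, reflected, cv2.TM_CCOEFF_NORMED)
    return float(scores.max())
```

Swapping cv2.flip(img, 1) for cv2.flip(img, 0) and padding vertically gives the vertical-symmetry measure; both scores can then go straight into the feature vector.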
Leonidas J. Guibas from Stanford University talked about this at ETVC'08: "Detection of Symmetries and Repeated Patterns in 3D Point Cloud Data."
