I am trying to set up a database of images that can be compared against a current image (so if the current image is equal, or almost equal, to the one being compared, it'll give a match).
However, to start this project off, I want to just compare two images using MATLAB to see how the process works.
Does anyone know how I might compare, say, image1.jpg and image2.jpg to see how closely related to each other they are? Basically, if I were to compare image1.jpg with image1.jpg, the relationship should be 100%, but comparing two different images might give me quite a close relationship.
I hope that makes some sense!!!
Thanks,
Well, the method to use greatly depends on what you define as similar images. If, for example, you can guarantee that translations (moves in the x and y directions) are very small (no more than a few pixels), a simple RMS difference measure might do fine. If this is not the case, brute-force template search methods might be an option. At the other end of the scale are advanced recognition techniques using morphological measures.
The first and simplest approach might look something like this:
% Convert to double first: subtracting uint8 images would clip negative differences to zero.
errorMeasure = sqrt(sum((double(image1(:)) - double(image2(:))).^2))
This method simply takes the difference and finds the "energy" of the error.
I have two binary images like this. I have a data set with lots of pictures like the one at the bottom, but with different signs.
and
I would like to compare them in order to know whether it's the same figure or not (especially inside the triangle). I took a look at SIFT and SURF features, but they don't work well on this type of picture (they find matching points even though the two pictures are different, especially inside).
I have also heard about SVM, but I don't know whether I have to implement it for this type of problem.
Do you have an idea ?
Thank you
I think you should not use SURF features on the binary image as you have already discarded a lot of information at that stage with your edge detector.
You could also use the linear or circular Hough transform, which in this case could tell you a lot about image differences.
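As an illustration of that Hough idea, here is a minimal sketch using OpenCV's Python bindings; the filenames, thresholds, and the crude count-based comparison are my assumptions, not something from the original answer:

import cv2
import numpy as np

# Load the two binary images as grayscale (filenames are placeholders).
img1 = cv2.imread("figure1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("figure2.png", cv2.IMREAD_GRAYSCALE)

def hough_descriptor(img):
    # Detect straight line segments with the probabilistic Hough transform.
    lines = cv2.HoughLinesP(img, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=20, maxLineGap=5)
    # Detect circles with the circular Hough transform.
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=50, param2=30)
    n_lines = 0 if lines is None else len(lines)
    n_circles = 0 if circles is None else circles.shape[1]
    return n_lines, n_circles

# Comparing the counts gives a crude structural difference measure.
print(hough_descriptor(img1), hough_descriptor(img2))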
If you want to find two exactly identical images, simply use a hash function like MD5.
But if you want to find related (not exactly identical) images, you are in trouble ;). Look for artificial neural network libraries...
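For the exact-duplicate case mentioned above, hashing the file bytes is short; a minimal sketch in Python (filenames are placeholders):

import hashlib

def file_md5(path):
    # Hash the raw file bytes; identical files yield identical digests.
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

# Two byte-for-byte identical images produce the same MD5 digest.
print(file_md5("image1.jpg") == file_md5("image2.jpg"))

Note that this only catches byte-for-byte identical files; the same picture re-saved or re-encoded will usually hash differently.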
I am looking to compare a new image to a database of images and then output the highest "similarity". The images I want to compare are similar, but the problem is that they're not equal pixel by pixel. I've already tried the BoW (Bag of Words) model, as per recommendation (I implemented it in MATLAB, but I'm willing to learn OpenCV); I tried various implementations without success. The best correct rate I got was 30%, which is really low.
Let me show you what I am talking about: imgur gallery with 5 example images. I want to detect that the four initial images are equal and that the fifth one is different. I wouldn't mind only detecting that the ones with the same angle of orientation are equal, though. (In my example, 2, 3 and 4.)
So, that being said, are there any better methods than BoW for that? Or perhaps BoW should be enough if I implemented it in a different way?
Thanks in advance.
I would try a keypoint-based approach using randomized trees. It has the advantage that point extraction is local and adapts to many sorts of transformations (like the ones your pictures show). Being local also makes the keypoints more robust against changes in illumination across the scene, occlusions, and so on.
Also, take a look at the SURF algorithm.
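SURF itself is patented and nowadays lives in opencv-contrib, so as an illustration of the keypoint-matching idea, here is a minimal sketch using ORB, a free alternative in OpenCV's Python bindings; the filenames, feature count, and distance threshold are placeholders:

import cv2

# Filenames are placeholders.
img1 = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("candidate.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors.
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors with Hamming distance and cross-checking.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# A larger fraction of close matches suggests more similar images.
good = [m for m in matches if m.distance < 40]
score = len(good) / max(len(kp1), len(kp2), 1)
print(f"similarity score: {score:.2f}")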
I am wondering if there is a pre-existing algorithm/library/framework to compare two images to see if one is a re-sized version of the other? The programming language doesn't matter at this stage.
If there is nothing out there, I'd need to write something up. What I have thought of so far:
(Expensive) Resize the larger to the smaller and compare pixel by pixel.
Better yet, just resize a few random "areas" of the picture and compare. If they match, compare more, etc...
Break the image into a number of rows and columns and do some sort of parity math on the color values.
The problem I see with the first two ideas especially is that there are different ways to resize a picture in the first place, so the math will likely not work out the same at all. Some resizing adds blur, etc...
If anyone could point me to some good literature on this subject, that would be great. My googling turns up mostly shareware applications, which are not what I want.
The goal is to have this running on the back end of a webserver.
The best approach depends on the characteristics of the images you are comparing: how likely it is that two images are the same, and, when they are different, whether they are typically off by a lot or whether the difference could be as minute as a single pixel.
If the images you need to compare are completely random, then going with the expensive solution, or some available package, might be the best bet.
If you know that the images are different more often than not, that they typically differ quite a lot, and you really want to hand-roll a solution, you could implement some initial "quick compare" steps that are less expensive and that quickly identify many of the cases where the images are different.
For example, you could resize the larger image and then compare pixel by pixel (or calculate a hash of the pixel values) along only a "diagonal line" of the image (top-left pixel to bottom-right pixel), excluding differing images that way and running the more expensive comparison only for those that pass this test.
Or take a pre-set number of points in whatever is a "good distribution" for the type of image, and again only do the more expensive comparison for those that pass this test.
If you know a lot about the images you will be comparing, if they have known characteristics, and if they are different more often than they are the same, implementing a cheap "quick elimination compare" along these lines could be worthwhile.
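A minimal sketch of that diagonal quick check, using Pillow; the filenames, sample size, and tolerance are placeholders:

from PIL import Image

def diagonal_signature(path, size=64):
    # Resize to a common size so diagonals line up, then sample the
    # pixels from top-left to bottom-right.
    img = Image.open(path).convert("L").resize((size, size))
    return [img.getpixel((i, i)) for i in range(size)]

def probably_same(path_a, path_b, tolerance=10):
    # Cheap pre-check: if the diagonals differ a lot, skip the
    # expensive full comparison.
    sig_a = diagonal_signature(path_a)
    sig_b = diagonal_signature(path_b)
    mean_diff = sum(abs(a - b) for a, b in zip(sig_a, sig_b)) / len(sig_a)
    return mean_diff <= tolerance

print(probably_same("large.jpg", "small.jpg"))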
You should look into the dHash algorithm for this.
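dHash resizes the image to a tiny grayscale grid and records whether each pixel is brighter than its right-hand neighbour, giving a 64-bit fingerprint that survives resizing and small edits. A minimal sketch with Pillow (filenames and the distance threshold are placeholders):

from PIL import Image

def dhash(path, hash_size=8):
    # Shrink to (hash_size+1) x hash_size grayscale; the extra column
    # lets us compare each pixel with its right-hand neighbour.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    px = img.load()
    bits = 0
    for y in range(hash_size):
        for x in range(hash_size):
            bits = (bits << 1) | (px[x, y] > px[x + 1, y])
    return bits

def hamming(a, b):
    # Number of differing bits; small distances mean similar images.
    return bin(a ^ b).count("1")

# Distances of roughly <= 10 out of 64 bits usually indicate near-duplicates.
print(hamming(dhash("a.jpg"), dhash("b.jpg")))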
I wrote a pure Java library just for this a few days back. You can feed it a directory path (including sub-directories), and it will list the duplicate images, with absolute paths, which you may want to delete. Alternatively, you can use it to find all unique images in a directory too.
It uses the AWT API internally, so it can't be used for Android, though. Since ImageIO has problems reading a lot of newer image types, I am using the TwelveMonkeys jar internally.
https://github.com/srch07/Duplicate-Image-Finder-API
A jar with dependencies bundled internally can be downloaded from https://github.com/srch07/Duplicate-Image-Finder-API/blob/master/archives/duplicate_image_finder_1.0.jar
The API can find duplicates among images of different sizes too.
So I'm trying to run a comparison of different images and was wondering if anyone could point me in the right direction for some basic metrics I can take for the group of images.
Assuming I have two images, A and B, I pretty much want as much data as possible about each so I can later programmatically compare them. Things like "general color", "general shape", etc. would be great.
If you can help me find specific properties and algorithms to compute them, that would be great!
Thanks!
EDIT: The end goal here is to be able to have a computer tell me how "similar" two pictures are. If two images are the same but in one someone blurred out a face, they should register as fairly similar. If two pictures are completely different, the computer should be able to tell.
What you are asking about is very general and non-specific.
Image information is formalised as entropy.
What you seem to be looking for is basically feature extraction and then a comparison of these features. There are tons of features that can be extracted, but a lot of them could be irrelevant depending on the differences between the pictures.
There are space-domain and frequency-domain descriptors of the image, each of which can be useful here. I can probably name more than 100 descriptors, but in your case only one could be sufficient, or none could be useful.
Pre-processing is also important; perhaps you could turn your images to grey-scale and then compare them.
This field is immensely diverse, so you need to be a bit more specific.
(Update)
What you are looking for is the topic of hundreds, if not thousands, of scientific articles. But, well, perhaps a simplistic approach can work.
So, assuming that the question here is not about identifying objects, that there is no translation, scaling or rotation involved, and that we are only dealing with two images which are the same except that one could have more noise added to it:
1) Image domain (space domain): Compare the pixels one by one and add up the squares of the differences. Normalise this value by width*height, i.e. just divide by the number of pixels. This could be a useful measure of similarity.
2) Frequency domain: Convert the image to a frequency-domain image (using the FFT in an image processing tool such as OpenCV), which will be 2D as well. Do the same squared-difference sum as above, though perhaps you want to limit the frequencies first. Then normalise by the number of pixels. This fares better with noise, translation, and small rotations, but not with scale. (A sketch of both measures follows.)
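A minimal sketch of both measures in Python; the answer mentions OpenCV, but plain NumPy is used here for brevity, assuming two equal-sized grayscale arrays:

import numpy as np

def space_domain_diff(img1, img2):
    # 1) Per-pixel squared difference, normalised by the pixel count.
    d = img1.astype(np.float64) - img2.astype(np.float64)
    return np.sum(d ** 2) / d.size

def freq_domain_diff(img1, img2):
    # 2) Compare FFT magnitude spectra; magnitudes ignore phase, so this
    #    is insensitive to translation.
    f1 = np.abs(np.fft.fft2(img1.astype(np.float64)))
    f2 = np.abs(np.fft.fft2(img2.astype(np.float64)))
    return np.sum((f1 - f2) ** 2) / f1.size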
SURF is a good candidate algorithm for comparing images
Wikipedia Article
A practical example (in Mathematica): identifying corresponding points in two images of the moon (rotated, colorized and blurred).
You can also calculate the sum of differences between the histogram bins of those two images. But it is not a silver bullet either...
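A minimal sketch of the histogram comparison with OpenCV's Python bindings; the filenames are placeholders, and correlation is only one of several comparison methods OpenCV offers:

import cv2

img1 = cv2.imread("image1.jpg")
img2 = cv2.imread("image2.jpg")

def gray_hist(img):
    # 256-bin grayscale histogram, normalised so image size doesn't matter.
    h = cv2.calcHist([cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)], [0], None,
                     [256], [0, 256])
    return cv2.normalize(h, h).flatten()

# Correlation: 1.0 means identical histograms.
score = cv2.compareHist(gray_hist(img1), gray_hist(img2), cv2.HISTCMP_CORREL)
print(score)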
I recommend taking a look at OpenCV. The package offers most (if not all) of the techniques mentioned above.
Sometimes two image files may be different on a file level, but a human would consider them perceptually identical. Given that, now suppose you have a huge database of images, and you wish to know whether a human would think some image X is present in the database or not. If all images had a perceptual hash / fingerprint, then one could hash image X, and it would be a simple matter to see whether it is in the database or not.
I know there is research around this issue, and some algorithms exist, but is there any tool, like a UNIX command-line tool or a library, that I could use to compute such a hash without implementing some algorithm from scratch?
edit: relevant code from findimagedupes, using ImageMagick
try $image->Sample("160x160!");          # resize to 160x160, ignoring aspect ratio
try $image->Modulate(saturation=>-100);  # strip colour saturation
try $image->Blur(radius=>3,sigma=>99);   # heavy blur to suppress fine detail
try $image->Normalize();                 # stretch contrast to the full range
try $image->Equalize();                  # histogram equalisation
try $image->Sample("16x16");             # downsample to a 16x16 grid
try $image->Threshold();                 # binarise to black and white
try $image->Set(magick=>'mono');         # set the output format to a mono bitmap
($blob) = $image->ImageToBlob();         # serialise; the blob is the fingerprint
edit: Warning! The ImageMagick $image object seems to contain information about the creation time of the image file that was read in. This means that the blob you get will be different even for the same image if it was retrieved at a different time. To make sure the fingerprint stays the same, use $image->getImageSignature() as the last step.
findimagedupes is pretty good. You can run "findimagedupes -v fingerprint images" to have it print the "perceptive hash", for example.
Cross-correlation or phase correlation will tell you if the images are the same, even with noise, degradation, and horizontal or vertical offsets. Using the FFT-based methods will make it much faster than the algorithm described in the question.
The usual algorithm doesn't work for images that are not at the same scale or rotation, though. You could pre-rotate or pre-scale them, but that's really processor-intensive. Apparently you can also do the correlation in log-polar space, and it will be invariant to rotation, translation, and scale, but I don't know the details well enough to explain that.
MATLAB example: Registering an Image Using Normalized Cross-Correlation
Wikipedia calls this "phase correlation" and also describes making it scale- and rotation-invariant:
The method can be extended to determine rotation and scaling differences between two images by first converting the images to log-polar coordinates. Due to properties of the Fourier transform, the rotation and scaling parameters can be determined in a manner invariant to translation.
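A minimal sketch of plain phase correlation with NumPy, recovering only the translation between two equal-sized grayscale arrays (the log-polar extension for rotation and scale is not shown):

import numpy as np

def phase_correlation(img1, img2):
    # Cross-power spectrum of the two images.
    F1 = np.fft.fft2(img1.astype(np.float64))
    F2 = np.fft.fft2(img2.astype(np.float64))
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12  # normalise; epsilon avoids divide-by-zero
    corr = np.fft.ifft2(cross).real
    # The peak location gives the (dy, dx) shift; its height indicates
    # how confident the match is (close to 1.0 for a clean match).
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return (dy, dx), corr.max()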
A colour histogram is good for the same image that has been resized, resampled, etc.
If you want to match different people's photos of the same landmark, it's trickier: look at Haar classifiers. OpenCV is a great free library for image processing.
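For the Haar classifier suggestion, a minimal detection sketch using one of the cascades that ships with the opencv-python package (the photo filename is a placeholder):

import cv2

# Load one of the Haar cascades bundled with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)
faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
print(f"found {len(faces)} face(s)")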
I don't know the algorithm behind it, but Microsoft Live Image Search just added this capability. Picasa also has the ability to identify faces in images, and groups faces that look similar. Most of the time, it's the same person.
Some machine learning technology like a support vector machine, neural network, naive Bayes classifier or Bayesian network would be best at this type of problem. I've written one of each of the first three to classify handwritten digits, which is essentially image pattern recognition.
Resize the image to a 1x1 pixel... if they are exactly equal, there is a small probability they are the same picture...
Now resize it to a 2x2 pixel image; if all 4 pixels are exactly equal, there is a larger probability they are the same...
Then 3x3; if all 9 pixels are exactly equal... good chance, etc.
Then 4x4; if all 16 pixels are exactly equal... better chance.
Etc...
Doing it this way, you can make efficiency improvements... if the 1x1 pixel grid is off by a lot, why bother checking the 2x2 grid? Etc. (A sketch of this follows.)
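A minimal sketch of this coarse-to-fine idea with Pillow; the filenames, size limit, and tolerance are placeholders:

from PIL import Image

def progressively_same(path_a, path_b, max_size=8, tolerance=8):
    a = Image.open(path_a).convert("L")
    b = Image.open(path_b).convert("L")
    for size in range(1, max_size + 1):
        # Compare at increasingly fine resolutions: 1x1, 2x2, 3x3, ...
        pa = list(a.resize((size, size)).getdata())
        pb = list(b.resize((size, size)).getdata())
        # Bail out early as soon as any level differs noticeably.
        if any(abs(x - y) > tolerance for x, y in zip(pa, pb)):
            return False
    return True

print(progressively_same("a.jpg", "b.jpg"))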
If you have lots of images, a color histogram could be used to get a rough measure of closeness before doing a full image comparison of each image against each other one (i.e. O(n^2)).
There is DPEG, "The" Duplicate Media Manager, but its code is not open. It's a very old tool - I remember using it in 2003.
You could use diff to see if they are REALLY different... I guess it will remove a lot of useless comparisons. Then, for the algorithm, I would use a probabilistic approach: what are the chances that they look the same? I'd base that on the amount of RGB in each pixel. You could also find some other metrics, such as luminosity and stuff like that.