I'm trying to build a script to go through my original, high-res photos and replace the old, low-res ones I uploaded to Flickr before I had a pro account.
For many of them I can just use Exif info such as date taken to determine a match. But some are really old, and either the original file didn't have Exif info, or it got clobbered by whatever stupid resizing software I used at the time.
So, unable to rely on metadata, I'm forced to resort to the content itself. The problem is that the originals are in different resolutions than the ones on Flickr (which is the whole point of this endeavour). So is there a way for me to compare them with some sort of fuzzy similarity measure that would allow me to set a threshold for requiring human input or not?
I guess knowing one image is a resized version of the other can yield better results than general similarity. A solution in any language will do, but Ruby would be a plus :)
Interesting problem, btw :)
Slow-ish solution - excellent chance of success
Use a scale-invariant feature detector to find corresponding features in both images. If the features are matched with a high score at similar locations, then you have your match.
I'd recommend SIFT, which generates a scale- and rotation-invariant 128-dimensional descriptor for each feature found in an image. SURF (available in OpenCV) is another (faster) feature point detector.
You can match features across two images by brute force (compare each descriptor in one image to each descriptor in the other), which is O(n^2) but pretty fast (especially in the VLFeat SIFT implementation). But if you need to compare the features in one image to several images (which you might have to) you should build a tree of the features and query it with the other image's features. K-d trees are useful, and OpenCV has a nice implementation.
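For illustration, here's a rough sketch of the brute-force SIFT matching in Python with OpenCV (assuming opencv-python 4.4+, where SIFT lives in the main module; the file names and the 0.75 ratio-test constant are placeholders):

import cv2

img1 = cv2.imread("original.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("flickr_version.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to drop ambiguous matches.
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# A high fraction of good matches suggests one image is a resized
# version of the other.
print("match ratio:", len(good) / max(len(kp1), 1))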
Fast solution - might work
Downsample your high-res image to the low-res dimensions and use a similarity measure like SAD (where the sum of the differences between blocks of, say, 3x3 pixels around a pixel in both images is the score) to determine a match.
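A minimal sketch of that idea in Python with OpenCV and NumPy (file names and the acceptance threshold are made up for illustration):

import cv2
import numpy as np

hi = cv2.imread("original.jpg")
lo = cv2.imread("flickr_version.jpg")

# Downsample the high-res image to the low-res dimensions.
hi_small = cv2.resize(hi, (lo.shape[1], lo.shape[0]),
                      interpolation=cv2.INTER_AREA)

# Sum of absolute differences, averaged per channel value (0..255).
sad = np.abs(hi_small.astype(np.int32) - lo.astype(np.int32)).sum()
score = sad / lo.size

print("needs human review" if score > 20 else "probable match")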
I'd recommend scripting a solution off of ImageMagick. The following (from the documentation on comparing images with IM) would output a comparative value that you can use.
convert image1 image2 \
-compose difference -composite -colorspace gray miff:- |\
identify -verbose - |\
sed -n '/^.*Mean: */{s//scale=2;/;s/(.*)//;s/$/*100\/32768/;p;q;}' | bc
Compute the normalized color histogram of both images and compare them using some method (histogram intersection, for example). Note that the normalized histogram is needed because the images have different resolutions. If the histograms are very dissimilar, they are not the same picture. But if they are similar, you have one of two cases: (i) they are the same picture, or (ii) they are different pictures that happen to have similar global color distributions.
For case (ii), split the images into rectangular tiles and repeat the process on corresponding tiles; this way you account for local properties of the image. Rank the results and pick the best match.
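A sketch of both stages in Python with OpenCV (the 8-bin quantization, 4x4 tile grid, and file names are arbitrary choices):

import cv2

def norm_hist(img):
    # Normalized color histogram (8 bins per channel), comparable
    # across images of different resolutions.
    hist = cv2.calcHist([img], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def tiled_score(a, b, grid=4):
    (ha, wa), (hb, wb) = a.shape[:2], b.shape[:2]
    total = 0.0
    for i in range(grid):
        for j in range(grid):
            ta = a[i*ha//grid:(i+1)*ha//grid, j*wa//grid:(j+1)*wa//grid]
            tb = b[i*hb//grid:(i+1)*hb//grid, j*wb//grid:(j+1)*wb//grid]
            total += cv2.compareHist(norm_hist(ta), norm_hist(tb),
                                     cv2.HISTCMP_INTERSECT)
    return total / grid ** 2

a, b = cv2.imread("image1.jpg"), cv2.imread("image2.jpg")
print("global:", cv2.compareHist(norm_hist(a), norm_hist(b),
                                 cv2.HISTCMP_INTERSECT))
print("tiled:", tiled_score(a, b))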
Problem statement:
Given an input image, find and extract the similar image from a cluttered scene. Then find the differences between the extracted image and the input image.
My Approach:
Up till now I have used SIFT features for feature matching and an affine transform to extract the image from the cluttered scene.
But I am not able to find a method that is good enough, and feasible for me, to find the differences between the input image and the extracted image.
I don't think there exists a particular technique for your problem. If the traditional methods don't suit your needs, maybe you can use the keypoints (SIFT) again to estimate the difference.
You have already done most of the work by matching the images using SIFT.
Next, you can use the corresponding SIFT-matched points to estimate the affine warp. Apply the required affine warp to the second image and crop so that the images are superimposable.
Now you can calculate the absolute difference of the two images and use SAD or SSD as a difference measure.
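A rough sketch of those steps in Python with OpenCV (the file names, ratio-test constant, and use of RANSAC are assumptions, and it presumes enough good matches for the affine estimate):

import cv2
import numpy as np

img1 = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("extracted.jpg", cv2.IMREAD_GRAYSCALE)

# Corresponding points from SIFT matching, as in the earlier step.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Estimate the affine warp and superimpose the second image on the first.
M, _ = cv2.estimateAffine2D(pts2, pts1, method=cv2.RANSAC)
h, w = img1.shape[:2]
aligned = cv2.warpAffine(img2, M, (w, h))

# Absolute difference; SAD/SSD summarize how different the images are.
diff = cv2.absdiff(img1, aligned)
print("SAD:", int(diff.sum()))
print("SSD:", int((diff.astype(np.int64) ** 2).sum()))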
I need a way to determine whether a picture is a photograph or not. I've got a bunch of random image files (paper document scans, logos and of course photographs taken by a camera) and I need to filter out only the photographs for creating a preview.
The solution proposed at "Determine if image is photograph or drawing, quickly" only works in a limited way (i.e. some logos are completely black with a white font, some logos have only colors in them and no white areas), and sometimes I've got a scan of white paper containing multiple photographs with white space around them. I need to identify those too, because then I have to key out the white part and save the photographs on the scan in separate files.
Your process to do this should probably be similar to the following:
1. Extract features from the image (pixel values, groups of pixels, HoG, SIFT, GIST, DCT, wavelets, dictionary-learning coefficients, etc., depending on how much time you have).
2. Aggregate these features somehow so that you get a fixed-length vector (histogram, pyramid scheme).
3. Apply a standard classification algorithm (SVM, k-NN, neural network, random forest) or clustering algorithm (k-means, GMM, etc.) and measure how well it works (the F1 score is usually okay; ROC may be better for 2-class problems).
4. Repeat from step 1 with different features if you are unsatisfied with the results from step 3.
The solution you reference seems to be pretty reasonable in terms of steps 1 and 2.
A simple next step in extracting and aggregating features could be to create histograms from all pixel values in the image. If you have a lot of labeled data you should feed these features to a standard classifier. Otherwise, run a clustering algorithm on these histogram features and check the cluster assignments to see if they are correlated with the photograph/non-photograph assignment.
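Here's a hedged sketch of that clustering route in Python (grayscale pixel histograms as the fixed-length feature, scikit-learn's k-means for the clustering; the file list, the 64-bin choice, and k=2 clusters are placeholders):

import cv2
import numpy as np
from sklearn.cluster import KMeans

paths = ["scan1.png", "logo1.png", "photo1.jpg"]  # your image files
feats = []
for p in paths:
    img = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([img], [0], None, [64], [0, 256]).flatten()
    feats.append(hist / hist.sum())  # normalize for differing image sizes

# Two clusters: ideally photographs vs. everything else.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(np.array(feats))
print(dict(zip(paths, labels)))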
Check the following paper: http://www.vision.ee.ethz.ch/~gallju/projects/houghforest/houghforest.html. They provide source code.
I believe the program accepts an input file with negative and positive images for training. The output of the classification part is an image voting map (a Hough map?). You might need to decide on a threshold value to locate regions of interest. So if there are two logos in the image, it will mark out both of them. The algorithm worked very well for me in the past.
Training on 100 positive and 100 negative images should be enough, I believe. Also, don't use big images for training (256x256 should be enough).
Basically, suppose that I have a fingerprint. I know the dimension of my image, and I know that the fingerprint is black on a white background or that it is green on a black background or something like that.
Is there a way to process only the parts that delimit the image, in this case, the fingerprint? What I'm trying to do is basically this:
1) Delimit fingerprint
2) Extract the important points to compare to other fingerprints
3) Find best match on a database of other fingerprints that had their points previously extracted
I already have methods for 2 and 3, so now I just would have to delimit the image.
Programming language would have to be Ruby, Java or C++. Ruby preferred, then Java, and God help me if I have to use C++. I don't have any experience with image processing, but I'd like to do this with multiple common formats such as jpg, gif, png, if possible.
I think the best way to do it is to apply an edge-detection filter to your image.
There are many approaches, as suggested by the Wikipedia article, but none of them is trivial because they work on gradients or kernels. You should check out Canny edge detection, which should be straightforward enough to implement: tutorial.
In any case, if you want to avoid going deep into implementation details, you should use OpenCV, a computer vision library that can do these things in a simple way. You can certainly use it from C++ and Java, and I think a Ruby wrapper is offered too. This is a simple example using that library with the Canny algorithm.
EDIT: actually my answer covers points 2-3, so I'm wondering what you mean by delimiting the image? Keep in mind that scaling and rotation must be considered too if you want to compare different fingerprints: you need a fuzzy comparator. Maybe you should work on the Fast Fourier Transform of the image, which can handle such things in a better way.
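For reference, a minimal Canny example through OpenCV's Python bindings (the blur kernel and the 50/150 hysteresis thresholds are arbitrary starting points):

import cv2

img = cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(img, (5, 5), 0)  # suppress noise first
edges = cv2.Canny(blurred, 50, 150)         # low/high thresholds
cv2.imwrite("edges.png", edges)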
An easy approach could be using thresholding, like:
Convert your image to grayscale - so you have fingerprint in white on black.
Find a threshold value that gets most of the fingerprint.
Use the open operation (http://en.wikipedia.org/wiki/Mathematical_morphology) to remove noise (experiment with dilating a few times).
Find the center of gravity (x, y) of the image and the standard deviations (vx, vy).
Assuming the pixel distribution is roughly Gaussian, in the box:
[x-2vx, y-2vy],
[x-2vx, y+2vy],
[x+2vx, y+2vy],
[x+2vx, y-2vy]
you will find about 95.4% of the pixels (two standard deviations in each direction).
You could narrow the box down to find the actual max and min pixels in it, if you have many outliers.
Use the box to clip from the original image.
It is a simple method that might work well for your situation :)
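A sketch of those steps in Python with OpenCV and NumPy (Otsu thresholding and a 3x3 kernel are assumptions to tune per image set):

import cv2
import numpy as np

img = cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE)
img = 255 - img  # fingerprint white on black

# Threshold (Otsu picks the value automatically), then open to denoise.
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

# Center of gravity and standard deviations of the foreground pixels.
ys, xs = np.nonzero(mask)
x, y, vx, vy = xs.mean(), ys.mean(), xs.std(), ys.std()

# Clip the original image to the 2-sigma box.
x0, x1 = max(int(x - 2 * vx), 0), int(x + 2 * vx)
y0, y1 = max(int(y - 2 * vy), 0), int(y + 2 * vy)
cv2.imwrite("delimited.png", img[y0:y1, x0:x1])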
I need to create fingerprints of many images (about 100,000 existing, 1,000 new per day, RGB, JPEG, max size 800x800) to compare every image to every other image very fast. I can't use binary comparison methods, because images which are nearly identical should also be recognized.
Best would be an existing library, but also some hints to existing algorithms would help me a lot.
Normal hashing or CRC calculation algorithms do not work well with image data. The dimensional nature of the information must be taken into account.
If you need extremely robust fingerprinting, such that affine transformations (scaling, rotation, translation, flipping) are accounted for, you can use a Radon transformation on the image source to produce a normative mapping of the image data - store this with each image and then compare just the fingerprints. This is a complex algorithm and not for the faint of heart.
A few simple solutions are possible:
Create a luminosity histogram for the image as a fingerprint
Create scaled down versions of each image as a fingerprint
Combine technique (1) and (2) into a hybrid approach for improved comparison quality
A luminosity histogram (especially one that is separated into RGB components) is a reasonable fingerprint for an image, and can be implemented quite efficiently. Subtracting one histogram from another produces a new histogram which you can process to decide how similar two images are. Histograms handle affine transformations quite well, because they only evaluate the distribution and occurrence of luminosity/color information. If you quantize each color component's luminosity information down to an 8-bit value, 768 bytes of storage are sufficient for the fingerprint of an image of almost any reasonable size. Luminosity histograms produce false negatives when the color information in an image is manipulated: if you apply transformations like contrast/brightness, posterize, or color shifting, the luminosity information changes. False positives are also possible with certain types of images, such as landscapes and images where a single color dominates the others.
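A sketch of such a 768-byte fingerprint in Python with NumPy (the max-scaling and the distance threshold are illustrative assumptions):

import cv2
import numpy as np

def fingerprint(path):
    img = cv2.imread(path)
    # One 256-bin histogram per channel, quantized to 8-bit values:
    # 3 x 256 bytes = 768 bytes per image.
    hist = np.concatenate([np.bincount(img[:, :, c].ravel(), minlength=256)
                           for c in range(3)]).astype(np.float64)
    return (hist / hist.max() * 255).astype(np.uint8)

a, b = fingerprint("one.jpg"), fingerprint("two.jpg")
distance = np.abs(a.astype(int) - b.astype(int)).sum()
print("similar" if distance < 10000 else "different")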
Using scaled images is another way to reduce the information density of the image to a level that is easier to compare. Reductions below 10% of the original image size generally lose too much of the information to be of use, so an 800x800 pixel image can be scaled down to 80x80 and still provide enough information to perform decent fingerprinting. Unlike histogram data, you have to perform anisotropic scaling of the image data when the source resolutions have varying aspect ratios. In other words, reducing a 300x800 image into an 80x80 thumbnail causes deformation of the image, such that when compared with a 300x500 image (that's very similar) it will cause false negatives. Thumbnail fingerprints also often produce false negatives when affine transformations are involved: if you flip or rotate an image, its thumbnail will be quite different from the original and may result in a false negative.
Combining both techniques is a reasonable way to hedge your bets and reduce the occurrence of both false positives and false negatives.
There is a much less ad-hoc approach than the scaled-down image variants proposed here, one that retains their general flavor but gives a much more rigorous mathematical basis for what is going on.
Take a Haar wavelet of the image. Basically the Haar wavelet is the succession of differences from the lower resolution images to each higher resolution image, but weighted by how deep you are in the 'tree' of mipmaps. The calculation is straightforward. Then once you have the Haar wavelet appropriately weighted, throw away all but the k largest coefficients (in terms of absolute value), normalize the vector and save it.
If you take the dot product of two of those normalized vectors it gives you a measure of similarity with 1 being nearly identical. I posted more information over here.
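A sketch of that fingerprint in Python with PyWavelets (this skips the depth weighting described above; the 128x128 working size and k=40 kept coefficients are arbitrary choices):

import cv2
import numpy as np
import pywt

def wavelet_fingerprint(path, k=40):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (128, 128)).astype(np.float64)
    coeffs = pywt.wavedec2(img, "haar")       # full Haar decomposition
    vec, _ = pywt.coeffs_to_array(coeffs)
    vec = vec.ravel().copy()
    vec[np.argsort(np.abs(vec))[:-k]] = 0.0   # keep k largest coefficients
    return vec / np.linalg.norm(vec)          # normalize the vector

a = wavelet_fingerprint("one.jpg")
b = wavelet_fingerprint("two.jpg")
print("similarity:", float(np.dot(a, b)))     # near 1.0 = nearly identical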
You should definitely take a look at phash.
For image comparison there is this PHP project:
https://github.com/kennethrapp/phasher
And my little javascript clone:
https://redaktor.me/phasher/demo_js/index.html
Unfortunately this is "bitcount"-based, but it will recognize rotated images.
Another approach in JavaScript was to build a luminosity histogram from the image with the help of canvas. You can visualize a polygon histogram on the canvas and compare that polygon against those in your database (e.g. MySQL spatial ...).
A long time ago I worked on a system that had some similar characteristics, and this is an approximation of the algorithm we followed:
Divide the picture into zones. In our case we were dealing with 4:3 resolution video, so we used 12 zones. Doing this takes the resolution of the source images out of the picture.
For each zone, calculate an overall color - the average of all pixels in the zone
For the entire image, calculate an overall color - the average of all zones
So for each image, you're storing n + 1 integer values, where n is the number of zones you're tracking.
For comparisons, you also need to look at each color channel individually.
For the overall image, compare the color channels for the overall colors to see if they are within a certain threshold - say, 10%
If the images are within the threshold, next compare each zone. If all zones are also within the threshold, the images are a strong enough match that you can at least flag them for further comparison.
This lets you quickly discard images that are not matches; you can also use more zones and/or apply the algorithm recursively to get stronger match confidence.
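A rough sketch of the scheme in Python with NumPy (a 3x4 grid gives the 12 zones mentioned above; storing one mean per channel per zone and the 10% threshold are assumptions):

import cv2
import numpy as np

def zone_colors(path, rows=3, cols=4):
    img = cv2.imread(path).astype(np.float64)
    h, w = img.shape[:2]
    zones = np.array([img[r*h//rows:(r+1)*h//rows,
                          c*w//cols:(c+1)*w//cols].mean(axis=(0, 1))
                      for r in range(rows) for c in range(cols)])
    return zones, zones.mean(axis=0)   # per-zone colors, overall color

za, oa = zone_colors("a.jpg")
zb, ob = zone_colors("b.jpg")

threshold = 0.10 * 255                 # 10% per color channel
if np.all(np.abs(oa - ob) < threshold):       # overall color first
    if np.all(np.abs(za - zb) < threshold):   # then every zone
        print("strong match - flag for further comparison")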
Similar to Ic's answer - you might try comparing the images at multiple resolutions. So each image gets saved as 1x1, 2x2, 4x4 .. 800x800. If the lowest resolution doesn't match (subject to a threshold), you can immediately reject it. If it does match, you can compare them at the next higher resolution, and so on.
Also - if the images share any similar structure, such as medical images, you might be able to extract that structure into a description that is easier/faster to compare.
As of 2015 (back to the future... on this 2009 question, which is now high-ranked in Google) image similarity can be computed using deep learning techniques. The family of algorithms known as autoencoders can create a vector representation which is searchable for similarity. There is a demo here.
One way you can do this is to resize the image and drop the resolution significantly (to 200x200 maybe?), storing a smaller (pixel-averaged) version for doing the comparison. Then define a tolerance threshold and compare each pixel. If the RGB values of all pixels are within the tolerance, you've got a match.
Your initial run through is O(n^2) but if you catalog all matches, each new image is just an O(n) algorithm to compare (you only have to compare it to each previously inserted image). It will eventually break down however as the list of images to compare becomes larger, but I think you're safe for a while.
After 400 days of running, you'll have 500,000 images, which means (discounting the time to resize the image down) 200(H)*200(W)*500,000(images)*3(RGB) = 60,000,000,000 comparisons. If every image is an exact match, you're going to be falling behind, but that's probably not going to be the case, right? Remember, you can discount an image as a match as soon as a single comparison falls outside your threshold.
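A minimal sketch of the comparison in Python with OpenCV/NumPy (the 200x200 size and a tolerance of 16 levels per channel are the assumptions here; any single out-of-tolerance pixel rejects the pair):

import cv2
import numpy as np

def small(path):
    return cv2.resize(cv2.imread(path), (200, 200),
                      interpolation=cv2.INTER_AREA).astype(np.int16)

def matches(a, b, tol=16):
    # Every channel of every pixel must lie within the tolerance.
    return not np.any(np.abs(a - b) > tol)

catalog = [small(p) for p in ["img1.jpg", "img2.jpg"]]  # previously inserted
new = small("new.jpg")
print("duplicates:", [i for i, c in enumerate(catalog) if matches(new, c)])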
Do you literally want to compare every image against the others? What is the application? Maybe you just need some kind of indexing and retrieval of images based on certain descriptors? Then for example you can look at MPEG-7 standard for Multimedia Content Description Interface. Then you could compare the different image descriptors, which will be not that accurate but much faster.
So you want to do "fingerprint matching"; that's pretty different from "image matching". Fingerprint analysis has been studied in depth over the past 20 years, and several interesting algorithms have been developed to ensure the right detection rate (with respect to the FAR and FRR measures: False Acceptance Rate and False Rejection Rate).
I suggest you look at the LFA (Local Feature Analysis) class of detection techniques, mostly built on minutiae inspection. Minutiae are specific characteristics of any fingerprint and have been classified into several classes. Mapping a raster image to a minutiae map is what most public authorities actually do to file criminals or terrorists.
See here for further references
For iPhone image comparison and image similarity development check out:
http://sites.google.com/site/imagecomparison/
To see it in action, check out eyeBuy Visual Search on the iTunes AppStore.
It seems that specialised image hashing algorithms are an area of active research, but perhaps a normal hash calculation of the image bytes would do the trick.
Are you seeking byte-identical images, rather than looking for images that are derived from the same source but may be in a different format or resolution (which strikes me as a rather hard problem)?
Sometimes two image files may be different on a file level, but a human would consider them perceptively identical. Given that, now suppose you have a huge database of images, and you wish to know if a human would think some image X is present in the database or not. If all images had a perceptive hash / fingerprint, then one could hash image X and it would be a simple matter to see if it is in the database or not.
I know there is research around this issue, and some algorithms exist, but is there any tool, like a UNIX command line tool or a library I could use to compute such a hash without implementing some algorithm from scratch?
edit: relevant code from findimagedupes, using ImageMagick
# findimagedupes' fingerprint pipeline: shrink, desaturate, blur heavily,
# normalize/equalize, shrink to 16x16, threshold to 1-bit mono
# ("try" appears to be a local error-handling helper in that script)
try $image->Sample("160x160!");
try $image->Modulate(saturation=>-100);
try $image->Blur(radius=>3,sigma=>99);
try $image->Normalize();
try $image->Equalize();
try $image->Sample("16x16");
try $image->Threshold();
try $image->Set(magick=>'mono');
($blob) = $image->ImageToBlob();
edit: Warning! The ImageMagick $image object seems to contain information about the creation time of the image file that was read in. This means that the blob you get will be different even for the same image if it was retrieved at a different time. To make sure the fingerprint stays the same, use $image->getImageSignature() as the last step.
findimagedupes is pretty good. You can run "findimagedupes -v fingerprint images" to let it print "perceptive hash", for example.
Cross-correlation or phase correlation will tell you if the images are the same, even with noise, degradation, and horizontal or vertical offsets. Using the FFT-based methods will make it much faster than the algorithm described in the question.
The usual algorithm doesn't work for images that are not the same scale or rotation, though. You could pre-rotate or pre-scale them, but that's really processor intensive. Apparently you can also do the correlation in a log-polar space and it will be invariant to rotation, translation, and scale, but I don't know the details well enough to explain that.
MATLAB example: Registering an Image Using Normalized Cross-Correlation
Wikipedia calls this "phase correlation" and also describes making it scale- and rotation-invariant:
The method can be extended to determine rotation and scaling differences between two images by first converting the images to log-polar coordinates. Due to properties of the Fourier transform, the rotation and scaling parameters can be determined in a manner invariant to translation.
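For what it's worth, here is a small sketch of plain phase correlation with NumPy's FFT, recovering the translation between two equal-sized grayscale images (no log-polar extension here):

import numpy as np

def phase_correlate(a, b):
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = np.conj(A) * B
    R /= np.abs(R) + 1e-12            # normalized cross-power spectrum
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak, corr.max()           # (dy, dx) shift of b, peak response

a = np.random.rand(64, 64)
b = np.roll(a, (5, 12), axis=(0, 1))  # same image, shifted by (5, 12)
print(phase_correlate(a, b))          # peak at (5, 12), response ~1.0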
A colour histogram is good for the same image that has been resized, resampled, etc.
If you want to match different people's photos of the same landmark, it's trickier - look at Haar classifiers. OpenCV is a great free library for image processing.
I don't know the algorithm behind it, but Microsoft Live Image Search just added this capability. Picasa also has the ability to identify faces in images, and groups faces that look similar. Most of the time, it's the same person.
Some machine learning technology like a support vector machine, neural network, naive Bayes classifier or Bayesian network would be best at this type of problem. I've written one each of the first three to classify handwritten digits, which is essentially image pattern recognition.
Resize the image to a 1x1 pixel... if they are exact, there is a small probability they are the same picture.
Now resize it to a 2x2 pixel image; if all 4 pixels are exact, there is a larger probability they are the same.
Then 3x3; if all 9 pixels are exact... good chance, etc.
Then 4x4; if all 16 pixels are exact... better chance.
Etc...
Doing it this way, you can make efficiency improvements: if the 1x1 pixel grid is off by a lot, why bother checking the 2x2 grid? Etc.
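A sketch of this coarse-to-fine check in Python with OpenCV (a small per-pixel tolerance stands in for "exact", since resampling is rarely bit-exact; the tolerance and resolution ladder are placeholders):

import cv2
import numpy as np

def progressive_match(path_a, path_b, tol=8):
    a, b = cv2.imread(path_a), cv2.imread(path_b)
    for s in (1, 2, 4, 8, 16, 32):
        sa = cv2.resize(a, (s, s), interpolation=cv2.INTER_AREA).astype(int)
        sb = cv2.resize(b, (s, s), interpolation=cv2.INTER_AREA).astype(int)
        if np.abs(sa - sb).max() > tol:
            return False   # off by a lot at a coarse level: stop early
    return True            # matched at every level: good chance

print(progressive_match("one.jpg", "two.jpg"))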
If you have lots of images, a color histogram could be used to get a rough measure of closeness before doing a full image comparison of each image against each other one (i.e. O(n^2)).
There is DPEG, "The" Duplicate Media Manager, but its code is not open. It's a very old tool - I remember using it in 2003.
You could use diff to see if the files are REALLY different... I guess it would remove lots of useless comparisons. Then, for the algorithm, I would use a probabilistic approach: what are the chances that they look the same? I'd base that on the amount of RGB in each pixel. You could also find some other metrics, such as luminosity and things like that.