I am trying to do some kind of image sorting.
I have 5 images, and the first one is my main image. I am trying to sort the images according to their similarity to it (most similar to least similar).
MATLAB has a matchFeatures function, but I don't think I have used it correctly because my results are wrong. I tried:
[indexPairs, matchmetric] = matchFeatures(features1, features2, 'MatchThreshold', 10)
Then I tried to sort the matchmetric array, but that didn't work.
Can anyone suggest an algorithm or any tips? Thank you.
You could compute the correlation coefficient between each image and your main image, and then sort the images based on that coefficient.
doc corr2
For example, let's say you store all your images in a cell array (called ImageCellArray) in which the first image is your "main image":
N = numel(ImageCellArray);  % total # of images, i.e. the size of the cell array containing them
CorrCoeff = zeros(1, N);
CorrCoeff(1) = 1;           % the main image is perfectly correlated with itself
for i = 2:N
    CorrCoeff(i) = corr2(rgb2gray(ImageCellArray{1}), rgb2gray(ImageCellArray{i}));
end
[values, indices] = sort(CorrCoeff, 'descend'); % sort the coefficients (most similar first) and get the number of the corresponding image
Then you're good to go I guess.
You could compute the PSNR (peak signal-to-noise ratio) of each image compared to the main image. PSNR is a metric commonly used to measure the quality of a compressed or reconstructed image against the original.
It's implemented in MATLAB in the Computer Vision System Toolbox as a functional block, and there is also a psnr function in the Image Processing Toolbox. The result is a number in decibels you can use to rank the images. A higher PSNR value indicates greater similarity.
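For instance, a minimal sketch, assuming (as in the corr2 answer above) a cell array ImageCellArray with the main image first, and that all images share the same size and class:
psnrValues = zeros(1, numel(ImageCellArray) - 1);
for i = 2:numel(ImageCellArray)
    psnrValues(i - 1) = psnr(ImageCellArray{i}, ImageCellArray{1}); % dB; higher = more similar
end
[~, order] = sort(psnrValues, 'descend'); % most similar first
order = order + 1;                        % map back to positions in the cell array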
Take a look at this example of image retrieval. Instead of matching the features between pairs of images it uses the KDTreeSearcher from the Statistics Toolbox to find nearest neighbors of each feature from the query image across the whole set of database images.
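As a rough sketch of that idea (the feature extraction step and the dbImageIndex vector, which records which database image each descriptor row came from, are assumptions for illustration):
dbMdl = KDTreeSearcher(dbFeatures);          % rows = descriptors pooled from all database images
nnIdx = knnsearch(dbMdl, queryFeatures);     % nearest database descriptor for each query feature
votes = accumarray(dbImageIndex(nnIdx), 1);  % one vote for the image each match came from
[~, ranking] = sort(votes, 'descend');       % best-matching database images first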
Problem statement:
Given an input image, find and extract a similar image from a cluttered scene. Then, from the extracted image, find the differences between the extracted image and the input image.
My Approach:
So far I have used SIFT features for feature matching and an affine transform to extract the image from the cluttered scene.
But I have not been able to find a method that is both good enough and feasible for finding the differences between the input image and the extracted image.
I don't think there is a single established technique for your problem. If the traditional methods do not suit your needs, maybe you can use the keypoints (SIFT) again to estimate the difference.
You have already done most of the work by matching the images using SIFT.
Next, you can use the corresponding matched SIFT points to estimate the affine warp. Apply the required affine warp to the second image and crop it so that the two images are superimposable.
Now you can calculate the absolute difference of the two images and use the SAD or SSD as an indication of the difference.
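A minimal MATLAB sketch of those two steps, assuming matchedPoints1 and matchedPoints2 are the already-matched keypoint locations (N-by-2 matrices) and im1 and im2 are same-class images:
tform  = fitgeotrans(matchedPoints2, matchedPoints1, 'affine');  % estimate the affine warp from the matches
warped = imwarp(im2, tform, 'OutputView', imref2d([size(im1,1) size(im1,2)])); % align im2 onto im1's grid
diffIm = imabsdiff(im1, warped);     % per-pixel absolute difference
sad    = sum(diffIm(:));             % sum of absolute differences
ssd    = sum(double(diffIm(:)).^2);  % sum of squared differences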
How do I apply a function to corresponding pixels of two images of the same resolution? Like Photoshop does when covering one layer with another one. What about more than two images?
If it was Wolfram Mathematica I would take a List of those images and transpose them to get a single "image" where each "pixel" would be an array of N pixels -- there I would apply a Mean[] function to them.
But how do I do that with vips? There are so many Vips::Image methods, and only here could I find a minimal description of what they all do. So, for example:
images = Dir["shots/*"].map{ |i| Vips::Image.new_from_file(i) }
ims = images.map(&:bandmean)
(ims.inject(:+) / ims.size).write_to_file "temp.png"
I wanted it to mean "calculating an average image" but I'm not sure what I've done here.
ruby-vips8 comes with a complete set of operator overloads, so you can just do arithmetic on images. It also does automatic common-subexpression elimination, so you don't need to be too careful about ordering or grouping, you can just write an equation and it should work well.
In your example:
require 'vips8'
images = Dir["shots/*"].map{ |i| Vips::Image.new_from_file(i) }
sum = images.reduce(:+)
avg = sum / images.length
avg.write_to_file "out.tif"
+-*/ with a constant always makes a float image, so you might want to cast the result down to uchar before saving (or maybe ushort?) or you'll have a HUGE output tiff. You could write:
avg = sum / images.length
avg.cast("uchar").write_to_file "out.tif"
By default, new_from_file opens images for random access. If your source images are JPEG or PNG, this will involve decompressing them entirely to memory (or to a disk temp file if they are very large) before processing can start.
In this case, you only need to scan the input images from top to bottom as you write the result, so you can stream the images through your system. Change the new_from_file to be:
images = Dir["shots/*"].map { |i| Vips::Image.new_from_file(i, :access => "sequential") }
to hint that you will only be using the image pixels sequentially, and you should see a nice drop in memory and CPU use.
PNG is a horribly slow format; I would use TIFF if possible.
You could experiment with bandrank. This does something like a median filter over a set of images: you give it an array of images, and at each pixel position it sorts the pixel values and selects the Nth one. It's a very effective way to remove transitory artifacts.
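For example, a hedged sketch (by default the rank index should select the median; check the docs of your ruby-vips version for the exact option name):
first, *rest = images
median = first.bandrank(rest)      # per-pixel median across the whole set
median.write_to_file "median.png"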
You can use condition.ifthenelse(then, else) to compute more complex functions. For example, to set all pixels greater than their local average equal to the local average, you could write:
(image > image.gaussblur(1)).ifthenelse(image.gaussblur(1), image)
You might be curious how vips will execute the program above. The code:
(images.reduce(:+) / images.length).cast("uchar")
will construct a pipeline of image processing operations: a series of vips_add() to sum the array, then a vips_linear() to do the divide, and finally a vips_cast() to knock it back to uchar.
When you call write_to_file, each core on your machine will be given a copy of the pipeline and they will queue up to process tiles from the source images as they arrive from the decompressor. Each time a line of output tiles is completed, a background thread will use the selected image write library (libtiff in my example) to send those scanlines back to disk.
You should see low memory use and good CPU utilization.
I have roughly 160 images for an experiment. Some of the images, however, have clearly different levels of brightness and contrast compared to others. For instance, I have something like the two pictures below:
I would like to equalize the pictures in terms of brightness and contrast (probably by finding some level in the middle, rather than matching one image exactly to another, though that would be okay if it makes things easier). Would anyone have any suggestions on how to go about this? I'm not really familiar with image analysis in MATLAB, so please bear with my follow-up questions should they arise. There is already a question on here about equalizing luminance, brightness and contrast for a set of images, but the code doesn't make much sense to me (due to my lack of experience working with images in MATLAB).
Currently I use GIMP to manipulate the images, but it's time-consuming with 160 of them, and going by subjective eye judgment isn't very reliable. Thank you!
You can use histeq to perform histogram specification, where the algorithm will try its best to make the target image match the distribution of intensities / histogram of a source image. This is also called histogram matching, and you can read up about it in my previous answer.
In effect, the distribution of intensities between the two images should hopefully be the same. If you want to take advantage of this using histeq, you can specify an additional parameter that specifies the target histogram. Therefore, the input image would try and match itself to the target histogram. Something like this would work assuming you have the images stored in im1 and im2:
out = histeq(im1, imhist(im2));
However, imhistmatch is the better option. You call it almost the same way as histeq, except you don't have to compute the histogram manually; you just specify the actual image to match against:
out = imhistmatch(im1, im2);
Here's a running example using your two images. Note that I'll opt to use imhistmatch instead. I read the two images directly from Stack Overflow, perform a histogram matching so that the first image matches the intensity distribution of the second, and show the result all in one window.
im1 = imread('http://i.stack.imgur.com/oaopV.png');
im2 = imread('http://i.stack.imgur.com/4fQPq.png');
out = imhistmatch(im1, im2);
figure;
subplot(1,3,1);
imshow(im1);
subplot(1,3,2);
imshow(im2);
subplot(1,3,3);
imshow(out);
This is what I get:
Note that the first image now more or less matches in distribution with the second image.
We can also flip it around and make the first image the source and we can try and match the second image to the first image. Just flip the two parameters with imhistmatch:
out = imhistmatch(im2, im1);
Repeating the above code to display the figure, I get this:
That looks a little more interesting. We can definitely see the shape of the second image's eyes, and some of the facial features are more pronounced.
As such, what you can do in the end is choose a good representative image that has the best brightness and contrast, then loop over each of the other images and call imhistmatch each time using this image as the reference, so that the other images match their intensity distributions to it. I can't really write exact code for this because I don't know how you are storing these images in MATLAB. If you share some of that code, I'd love to write more.
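In the meantime, a minimal sketch, assuming the images are stored in a cell array imgs and that imgs{refIdx} is the chosen reference:
refIdx  = 1;                     % index of the representative image (assumption)
matched = cell(size(imgs));
matched{refIdx} = imgs{refIdx};
for k = 1:numel(imgs)
    if k ~= refIdx
        matched{k} = imhistmatch(imgs{k}, imgs{refIdx}); % match image k to the reference
    end
end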
Currently I am trying to figure out the Signal to Noise Ratio of a set of images as a way of gauging the performance of my deconvolution (filtering algorithms). I Have a set of images like the one below, which show the image, before and after the algorithm:
Now, I have discovered quite a few ways of judging the performance. One of these is to use the formula for the SNR of an image, where the signal is the original image and the noise is the difference from the filtered image. Another method, described in this question, figures out the SNR from a single image by itself; that way, I can compute an SNR value for each of the two images and compare them.
Therefore, my question is this: the resources on the internet are confusing, and I do not know the "correct" way to measure the SNR of these images and use it as a performance metric.
It really depends on what you are trying to compare, and what you deem "signal" and "noise". In your first method, you are effectively calculating the error (or difference) between image 1 and image 2, where you assume image 2 was tainted by noise but image 1 was not (this is also a sort of signal-to-distortion ratio). This measurement is therefore relative: it measures the performance of your method of transformation from original to target (or your distortion technique), not the image itself. For example, say a new type of encrypting filter generated image 2 from image 1, and you want to measure how different the images are in order to work out the performance of your filter.
In the second method, based on the link you posted, you are assuming that noise is present in both images but at different levels, and you are measuring it against each individual image; in other words, you are measuring the standard deviation of each individual image, which is not relative. This second measurement is usually used to compare results generated from the same source, i.e. an experiment produces N images of the same object in a controlled environment, and you want to measure, for example, the amount of noise present at the scene (you would use this method to work out the covariance of the noise so you can control the experiment environment).
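A minimal MATLAB sketch of the first, reference-based measure (the variable names original and filtered are assumptions; both images are assumed to be the same size):
signal = double(original(:));
noise  = signal - double(filtered(:));               % treat the difference as noise
snrDb  = 10 * log10(sum(signal.^2) / sum(noise.^2)); % SNR in decibels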
I'm attempting to create a true mosaic application. At the moment I have one mosaic image, i.e. the one the mosaic is based on, and about 4000 images from my iPhoto library that act as the image library. I have already done my research and analysed the mosaic image. I've divided it into a 64x64 grid of slices, each 8 pixels square. I've calculated the average colour for each slice and ascertained its R, G, B and brightness values (luminance (perceived, option 1) = 0.299*R + 0.587*G + 0.114*B). I have done the same for each of the image library photos.
The mosaic slices table looks like so.
slice_id, slice_image_id, slice_slice_id, slice_image_column, slice_image_row, slice_colour_hex, slice_rgb_red, slice_rgb_blue, slice_rgb_green, slice_rgb_brightness
The image library table looks like so.
upload_id, upload_file, upload_colour_hex, upload_rgb_red, upload_rgb_green, upload_rgb_blue, upload_rgb_brightness
So basically I'm reading the image slices from the slices table into PHP and then pulling the appropriate images out of the library table based on the colour hexes. My trouble is that I've been at this too long (and have probably had too many energy drinks), so I'm not concentrating properly: I can't figure out how to pick the nearest colour neighbour when an exact hex match doesn't exist.
Any ideas on the perfect query?
NB: I know pulling out the slices one by one is not ideal; however, the mosaic is only rebuilt periodically, so a sudden burst in the MySQL load doesn't really bother me. That said, if there is a way to pull the images out all at once, that would also be a massive bonus.
Update: brightness comparisons.
[Image: mosaic with brightness (source: buggedcom.co.uk)]
[Image: mosaic without brightness (source: buggedcom.co.uk)]
One way to minimize the difference between the colours (in terms of their RGB components) is to minimize the squared difference in each component simultaneously. Thus you're looking for the entry with the lowest
(targetRed - rowRed)^2 + (targetGreen - rowGreen)^2 + (targetBlue - rowBlue)^2
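In MySQL terms, a hedged sketch (the table name uploads is an assumption, the column names come from your schema above, and :target_red etc. are placeholders for the slice's average colour):
SELECT upload_id, upload_file,
       (POW(upload_rgb_red   - :target_red,   2) +
        POW(upload_rgb_green - :target_green, 2) +
        POW(upload_rgb_blue  - :target_blue,  2)) AS distance
FROM uploads
ORDER BY distance ASC
LIMIT 1;
Dropping or raising the LIMIT while keeping the ORDER BY would rank every library image by closeness in one query.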
I think you may be better off using HSL instead of RGB as the colour space. Formulas to compute HSL from RGB are available on the internet (and in the linked Wikipedia article); they may give you what you need to compute the best match.