How to measure the peak signal-to-noise ratio of images?

I have the following images:
Corrupted with 30% salt-and-pepper noise
After denoising
I have denoised the images with various techniques. How do I compare which method is best in terms of denoising? Here is my PSNR function:
function PSNR = PeakSignaltoNoiseRatio(origImg, distImg)
    origImg = double(origImg);
    distImg = double(distImg);
    [M, N] = size(origImg);
    err = origImg - distImg;               % per-pixel error
    MSE = sum(sum(err .* err)) / (M * N);
    if (MSE > 0)
        PSNR = 10 * log(255 * 255 / MSE) / log(10);
    else
        PSNR = 99;                         % identical images
    end
end
Which two images should I take to calculate the PSNR?

Did you check the Wikipedia article on PSNR? For one, it gives a cleaner formula that would tidy up your code. (For example, why are you checking whether MSE > 0? It can only be zero if the two images are identical. Also, this looks to be MATLAB code, so use the log10() function to save some confusing base conversions. Lastly, be sure that the input to this function is actually a quantized image on the 0-255 scale, and not a double-valued image between 0 and 1.)
Your question is unclear. If you want to use PSNR as a metric for performance, then you should compute the PSNR of each denoised result against the original and report those numbers. That probably won't give a very good summary of which methods are doing better, but it's a start. Another option is to hand-select smaller sub-regions of the original image that you think correspond to different qualitative phenomena, such as a window on the background, a window on the foreground, and a window spanning the two. Then compute the PSNR for only those windows, again for each denoised result vs. the original. In the end, you want a table showing the PSNR of each method as compared to the original, possibly with this sub-window breakdown.
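For instance, here is a minimal Python sketch of that tabulation using scikit-image's built-in PSNR (the file names are hypothetical placeholders):
# Compare each denoising result against the clean original and
# print a small PSNR table; higher is better.
from skimage import io
from skimage.metrics import peak_signal_noise_ratio

original = io.imread("original.png", as_gray=True)  # floats in [0, 1]

for fname in ["median_filter.png", "wiener.png", "total_variation.png"]:
    denoised = io.imread(fname, as_gray=True)
    psnr = peak_signal_noise_ratio(original, denoised, data_range=1.0)
    print(f"{fname}: {psnr:.2f} dB")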
You may want to look into more sophisticated methods depending on what application this is for. The chapter on total variation image denoising in Tony Chan's book is very helpful (link).

Here is a Jython/Python example using the DataMelt program.
Put these lines into a file "test.py" and run it inside DataMelt.
It will print the PSNR value for two downloaded images. Replace the file names if you have different images.
from Catalano.Imaging.Tools import ObjectiveFidelity
from Catalano.Imaging import FastBitmap
from jhplot import *
# download the two test images
print Web.get("http://jwork.org/dmelt/examples/data/logo_jhepwork.png")
print Web.get("http://jwork.org/dmelt/examples/data/logo_jhepwork_noisy.png")
# load both images and convert them to grayscale
original = FastBitmap("logo_jhepwork.png")
original.toGrayscale()
reconstructed = FastBitmap("logo_jhepwork_noisy.png")
reconstructed.toGrayscale()
# compare the grayscale images
img = ObjectiveFidelity(original, reconstructed)
print "Peak signal-to-noise ratio (PSNR)=", img.getPSNR()

Related

How do I check if two images are the same but compressed differently?

As an example, if I upload an image to imgur twice, and once to another website, there's a fair chance that all three images will have different checksums. JPEG is lossy, so I can't simply check whether the pixels match.
How do I check whether I have the same picture encoded differently? I don't want to write an algorithm; I want to use a library or an offline app via a CLI.
Additional information: I'd prefer images to be considered different if they're cropped differently, but for my use case it won't matter (and I can simply check the width and height if I want that?).
There is an Imgur bot called "repoststatistics" that uses dHash to compare images.
How the bot works:
https://www.hackerfactor.com/blog/index.php?/archives/2013/01/21.html
Libraries you can use to do the same:
https://github.com/benhoyt/dhash
https://github.com/Rayraegah/dhash
"I've found that dhash is great for detecting near duplicates, but because of the simplicity of the algorithm, it's not great at finding similar images or duplicate-but-cropped images -- you'd need a more sophisticated image fingerprint if you want that. However, the dhash is good for finding exact duplicates and near duplicates, for example, the same image with slightly altered lighting, a few pixels of cropping, or very light photoshopping."
This task definition will definitely include some kind of heuristic.
The same image can be represented in memory in a myriad of ways, so you're basically asking whether the images are similar from a human's perspective.
Step 1:
Looking at the size of the image could be a very good first step. It's easy, cheap, and actually serves as a good baseline before trying to compare the actual contents of the pictures.
Step 2:
Now that we know the images are of the same size, we get to the hard part: how do we compare the contents.
There are a ton of different approaches here. The simplest are probably computing the L2 norm or mean squared error (MSE), or using the Structural Similarity Index (SSIM).
Furthermore, you can try to account for small color deviations by converting the image to grayscale first.
Here is a Python script that compares sizes, converts to grayscale, and compares the images with SSIM using a controllable threshold:
#!/usr/bin/env python
from cv2 import imread, cvtColor, COLOR_BGR2GRAY
from skimage.metrics import structural_similarity

def get_size(im):
    h, w, d = im.shape  # OpenCV arrays are (rows, cols, channels)
    return (w, h)

def main(first, second, ssim_threshold):
    first_im = imread(first)
    first_sz = get_size(first_im)
    second_im = imread(second)
    second_sz = get_size(second_im)
    # cheap size check first
    if first_sz != second_sz:
        print(f'Image sizes differ {first_sz} != {second_sz}')
        return False
    # convert to grayscale to tolerate small color deviations
    first_gs = cvtColor(first_im, COLOR_BGR2GRAY)
    second_gs = cvtColor(second_im, COLOR_BGR2GRAY)
    ssim = structural_similarity(first_gs, second_gs)
    if ssim < ssim_threshold:
        print(f'Image SSIM lower than allowed threshold [{ssim:.5f} < {ssim_threshold}]')
        return False
    print('Images are the same')
    return True

if __name__ == '__main__':
    import sys
    import argparse
    parser = argparse.ArgumentParser(description='Compare two images')
    parser.add_argument('first', type=str,
                        help='First image for comparison')
    parser.add_argument('second', type=str,
                        help='Second image for comparison')
    parser.add_argument('ssim', type=float, nargs='?', default=0.95,
                        help='Structural similarity minimum (Range: (0, 1], 1 means identical)')
    args = parser.parse_args()
    same = main(args.first, args.second, args.ssim)
    sys.exit(0 if same else 1)
I used it on the following photos:
Like this:
>>> python compare.py 1.jpeg 2.jpeg 0.90
Image SSIM lower than allowed threshold [0.88254 < 0.9]
>>> python compare.py 1.jpeg 3.jpeg 0.90
Images are the same
Notes:
Notice that with a 0.9 threshold, the same image with a filter was identified as the same, but the grayscale one wasn't; you should probably find the right threshold for your use case.
The script exits with 0 = equal and 1 = not-equal, so that you can automate it
It uses Python3 and the following packages:
opencv-python
scikit-image
Yes, you can simply check the width and height if you want pictures that are cropped differently to be considered different. That could be done before (or, if the sizes differ, instead of) the following procedure to compare picture A and picture B:
If A and B are of the same type (and, for lossy types, quality), diff them.
If A and B are of different lossless types, convert them to the same lossless type (by converting A to the type of B, B to the type of A, or both to a third lossless type) and diff them (see the sketch after this list).
If A is of a lossless type and B is of a lossy type, convert A to the type and quality of B, and diff them.
If A and B are of different lossy types or qualities, whether the sources of A and B were the same is generally unknowable, because some information was lost. In this case, the best you can do is decide whether A and B are similar enough, by a method such as the ones described in the comments on your question; but beware: however unlikely, different encodings of different image data may look the same.
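For the lossless cases, here is a minimal Pillow sketch (the file names and formats are hypothetical): decode both files to raw RGB, a common lossless form, and diff the pixels.
from PIL import Image, ImageChops

a = Image.open("a.png").convert("RGB")
b = Image.open("b.bmp").convert("RGB")

if a.size != b.size:
    print("different dimensions")
else:
    diff = ImageChops.difference(a, b)
    # getbbox() returns None when every pixel difference is zero
    print("identical" if diff.getbbox() is None else "different")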

Combine multiple images using vips (ruby-vips8)

How do I apply a function to corresponding pixels of two images of the same resolution, like Photoshop does when blending one layer with another? And what about more than two images?
If it were Wolfram Mathematica, I would take a List of those images and transpose them to get a single "image" where each "pixel" would be an array of N pixels, and then apply a Mean[] function to them.
But how do I do that with vips? There are so many Vips::Image methods, and only here could I find some minimal description of what they all mean. So, for example:
images = Dir["shots/*"].map{ |i| Vips::Image.new_from_file(i) }
ims = images.map(&:bandmean)
(ims.inject(:+) / ims.size).write_to_file "temp.png"
I wanted this to mean "calculate an average image", but I'm not sure what I've actually done here.
ruby-vips8 comes with a complete set of operator overloads, so you can just do arithmetic on images. It also does automatic common-subexpression elimination, so you don't need to be too careful about ordering or grouping, you can just write an equation and it should work well.
In your example:
require 'vips8'
images = Dir["shots/*"].map{ |i| Vips::Image.new_from_file(i) }
sum = images.reduce (:+)
avg = sum / images.length
avg.write_to_file "out.tif"
+-*/ with a constant always makes a float image, so you might want to cast the result down to uchar before saving (or maybe ushort?), or you'll have a HUGE output TIFF. You could write:
avg = sum / images.length
avg.cast("uchar").write_to_file "out.tif"
By default, new_from_file opens images for random access. If your source images are JPG or PNG, this will involve decompressing them entirely to memory (or to a disk temp file if they are very large) before processing can start.
In this case, you only need to scan the input images from top to bottom as you write the result, so you can stream the images through your system. Change the new_from_file to be:
images = Dir["shots/*"].map { |i| Vips::Image.new_from_file(i, :access => "sequential") }
to hint that you will only be using the image pixels sequentially, and you should see a nice drop in memory and CPU use.
PNG is a horribly slow format; I would use TIFF if possible.
You could experiment with bandrank. This does something like a median filter over a set of images: you give it an array of images, and at each pixel position it sorts the pixel values and selects the Nth one. It's a very effective way to remove transitory artifacts.
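As a rough illustration of the idea, here is a sketch using libvips' Python binding, pyvips (assuming the same operator is exposed there; the file glob is hypothetical):
import glob
import pyvips

files = sorted(glob.glob("shots/*"))
ims = [pyvips.Image.new_from_file(f, access="sequential") for f in files]
# bandrank's default index picks the middle (median) value at each pixel
median = ims[0].bandrank(ims[1:])
median.write_to_file("median.tif")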
You can use condition.ifthenelse(then, else) to compute more complex functions. For example, to set all pixels greater than their local average equal to the local average, you could write:
(image > image.gaussblur(1)).ifthenelse(image.gaussblur(1), image)
You might be curious how vips will execute the program above. The code:
(images.reduce(:+) / images.length).cast("uchar")
will construct a pipeline of image processing operations: a series of vips_add() to sum the array, then a vips_linear() to do the divide, and finally a vips_cast() to knock it back to uchar.
When you call write_to_file, each core on your machine will be given a copy of the pipeline and they will queue up to process tiles from the source images as they arrive from the decompressor. Each time a line of output tiles is completed, a background thread will use the selected image write library (libtiff in my example) to send those scanlines back to disk.
You should see low memory use and good CPU utilization.

Image sort with MATLAB

I am trying to do some kind of image sorting.
I have 5 images, and the first one is my main image. I am trying to sort the images according to their similarity (most similar image to least similar image).
MATLAB has a matchFeatures method, but I don't think I have used it correctly, because my results are wrong. I tried to use:
[indexPairs, matchmetric] = matchFeatures(features1, features2, "MatchThreshold", 10)
Then I tried to sort the matchmetric array, but it didn't work.
Can anyone tell me some algorithm or any tips? Thank you.
You could compute the correlation coefficient between each image and your main image, and then sort based on the coefficient.
doc corr2
For example, let's say you store all your images in a cell array (called ImageCellArray) in which the first image is your "main image":
for i = 2:size(ImageCellArray, 2)  % size(ImageCellArray,2) is the total number of images
    CorrCoeff(i) = corr2(rgb2gray(ImageCellArray{1}), rgb2gray(ImageCellArray{i}));
end
% sort in descending order so the most similar image comes first;
% indices gives the number of the corresponding image
[values, indices] = sort(CorrCoeff, 'descend');
Then you're good to go I guess.
You could compute the PSNR (peak signal-to-noise ratio) of each image against the main image. PSNR is a metric commonly used to measure the quality of a reconstructed (for example, compressed) image against the original.
It's implemented in MATLAB in the Computer Vision System Toolbox as a functional block, and there is also a psnr function in the Image Processing Toolbox. The result is a number in decibels you can use to rank the images; a higher PSNR value indicates greater similarity.
Take a look at this example of image retrieval. Instead of matching features between pairs of images, it uses the KDTreeSearcher from the Statistics Toolbox to find the nearest neighbors of each feature from the query image across the whole set of database images.

How to average multiple images using Octave and matrix manipulation to reduce noise?

UPDATE
Here is my code, which is meant to add up the two matrices element by element and then divide by two.
function [finish] = stackAndMeanImage (initFrame, finalFrame)
    cd 'C:\Users\Disc-1119\Desktop\Internships\Tracking\Octave\highway\highway (6-13-2014 11-13-41 AM)';
    pkg load image;
    i = initFrame;
    f = finalFrame;
    astr = num2str(i);
    tmp = rgb2gray(imread(astr, 'jpg'));  % convert the first frame too, so sizes match
    d = f - i;
    for a = 1:d
        astr = num2str(i + a);            % advance to the next frame each iteration
        read_tmp = imread(astr, 'jpg');
        read_tmp = rgb2gray(read_tmp);
        tmp = tmp + read_tmp;             % ':+' is not an operator; element-wise add
        tmp = tmp / 2;                    % running pairwise mean
    end
    imwrite(tmp, 'meanimage.JPG');
    finish = 'done';
end
Here are two example input images
http://imgur.com/5DR1ccS,AWBEI0d#1
And here is one output image
http://imgur.com/aX6b0kj
I am really confused as to what is happening. I have not yet implemented what the other answers suggest, though.
OLD
I am working on an image processing project where I am manually choosing images that are 'empty', or only have the background, so that my algorithm can compute the differences and then do some more analysis. I have a simple piece of code that computes the mean of two images, which I have converted to grayscale matrices. But this only works for two images: when I find the mean of two, then take that mean and find the mean of it versus the next image, and do this repeatedly, I end up with a washed-out white image that is absolutely useless. You can't even see anything.
I found that there is a function in MATLAB called imfuse that is able to combine images. I was wondering if anyone knew the process that imfuse uses to combine images; I am happy to implement it in Octave. Or does anyone know of, or has anyone already written, a piece of code that achieves something similar? Again, I am not asking anyone to write code for me, just wondering what the process for this is and whether there are pre-existing functions out there, which I have not found in my research.
Thanks,
AeroVTP
You should not end up with a washed-out image. Instead, you should end up with an image which is, technically speaking, temporally low-pass filtered. What this means is that half of the information content is from the last image, one quarter from the second-to-last image, one eighth from the third-to-last image, and so on.
Actually, the effect on a moving image is similar to a display with a slow response time.
If you are ending up with a white image, you are doing something wrong. nkjt's guess of type challenges is a good one. Another possibility is that you have forgotten to divide by two after summing the two images.
One more thing... If you are doing linear operations (such as averaging) on images, your image intensity scale should be linear. If you just use the RGB values, or grayscale values simply calculated from them, you may get bitten by the nonlinearity of the image, a property known as gamma. (Admittedly, most image processing programs just ignore the problem, as it is not always a big one.)
As your project calculates differences of images, you should take this into account. I suggest using linearised floating point values. Unfortunately, the linearisation depends on the source of your image data.
On the other hand, averaging is often the most efficient way of reducing noise. So you are on the right track, assuming the images are similar enough.
However, after having a look at your images, it seems you may actually want to do something other than average them. If I understand your intention correctly, you would like to get rid of the cars in your road cam to give you just the car-less background, which you could then subtract from each image to get the cars.
If that is what you want to do, you should consider using a median filter instead of averaging. What this means is that you take for example 11 consecutive frames. Then for each pixel you have 11 different values. Now you order (sort) these values and take the middle (6th) one as the background pixel value.
If your road is empty most of the time (at least 6 frames of 11), then the 6th sample will represent the road regardless of the colour of the cars passing your camera.
If you have an empty road, the result from the median filtering is close to averaging. (Averaging is better with Gaussian white noise, but the difference is not very big.) But your averaging will be affected by white or black cars, whereas median filtering is not.
The problem with median filtering is that it is computationally intensive. I am very sorry, I speak only broken and ancient Octave, so I cannot give you any useful code. In MATLAB or PyLab you would stack, say, 11 images into an M x N x 11 array, and then use a single median command along the depth axis. (When I say intensive, I do not mean it couldn't be done in real time with your data. It can, but it is much more complicated than averaging.)
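For example, a minimal NumPy sketch of that stacking idea (the frame file names are hypothetical):
import numpy as np
from PIL import Image

# load 11 consecutive grayscale frames into an M x N x 11 stack
frames = [np.asarray(Image.open("frame%02d.jpg" % i).convert("L"))
          for i in range(11)]
stack = np.stack(frames, axis=2)

# the per-pixel median along the depth axis is the background estimate
background = np.median(stack, axis=2).astype(np.uint8)
Image.fromarray(background).save("background.png")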
If you really have a lot of traffic, the road is visible behind the cars less than half of the time. Then the median trick will fail. You will need to take more samples and then find the most typical value, because it is likely to be the road (unless all cars have similar colours). It will help a lot to use the colour image, as cars look more different from each other in RGB or HSV than in grayscale.
Unfortunately, if you need to resort to this type of processing, the path is slightly slippery and rocky. Average is very easy and fast, median is easy (but not that fast), but then things tend to get rather complicated.
Another aside that came to mind: if you want a rolling average, there is a very simple and effective way to calculate it with an arbitrary length (an arbitrary number of frames to average):
# N is the number of images to average
# P[i] are the input frames
# S is a sum accumulator (sum of N frames)

# calculate the sum of the first N frames
S <- 0
I <- 0
while I < N
    S <- S + P[I]
    I <- I + 1

# save_img() saves an averaged image
while there are images to process
    save_img(S / N)
    S <- -P[I-N] + S + P[I]
    I <- I + 1
Of course, you'll probably want to use for loops and the += and -= operators, but the idea is there. For each frame you only need one subtraction, one addition, and one division by a constant (which can be turned into a multiplication, or even a bitwise shift in some cases, if you are in a hurry).
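Translated into Python with NumPy, the same idea might look like this (a sketch; frames are assumed to be equally sized grayscale arrays):
from collections import deque
import numpy as np

def rolling_average(frames, n):
    """Yield the mean of each window of n consecutive frames,
    using one addition and one subtraction per frame."""
    window = deque()
    s = 0.0
    for frame in frames:
        f = frame.astype(np.float64)
        window.append(f)
        s = s + f
        if len(window) > n:
            s = s - window.popleft()  # drop the oldest frame from the sum
        if len(window) == n:
            yield s / n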
I may have misunderstood your problem, but I think what you're trying to do is the following: read all images into a matrix and then use mean(). This assumes you are able to fit them all in memory.
function [finish] = stackAndMeanImage (ini_frame, final_frame)
    pkg load image;
    dir_path = 'C:\Users\Disc-1119\Desktop\Internships\Tracking\Octave\highway\highway (6-13-2014 11-13-41 AM)';
    n_frames = final_frame - ini_frame;   % 'd' was undefined in the original
    imgs = cell (1, 1, n_frames);
    ## read all images into a cell array
    current_frame = ini_frame;
    for n = 1:n_frames
        fname = fullfile (dir_path, sprintf ("%i", current_frame++));
        imgs{n} = rgb2gray (imread (fname, "jpg"));
    endfor
    ## create 3D matrix out of all frames and calculate mean across 3rd dimension
    imgs = cell2mat (imgs);
    avg = mean (imgs, 3);
    ## mean returns double precision so we cast it back to uint8 after
    ## rescaling it to the range [0 1]. This assumes the images were all
    ## originally uint8, but since they are jpgs, that's a safe assumption
    avg = im2uint8 (avg ./ 255);
    imwrite (avg, fullfile (dir_path, "meanimage.jpg"));
    finish = "done";
endfunction

Determine a color: “how much of a single color is in the image”

I'm trying to calculate an average value of one color over the whole image, in order to determine how color, saturation, intensity, or any other value describing it changes between frames of the video.
However, I would like to get just one value that describes the whole frame (and a single, chosen color in it). Calculating a simple average value of the color in a frame gives me very small differences between video frames, just 2-3 on a 0..255 scale.
Is there any other method to determine the color of an image, other than a histogram, which as I understand will give me more than one value describing a single frame?
Which library are you using for image processing? If it's OpenCV (or MATLAB), then the steps here will be quite easy. Otherwise you'd need to look around and experiment a bit.
Use a mean shift filter on RGB (or gray, whichever) to cluster the colors in the image - nearly similar colors are clustered together. This reduces the number of colors to deal with.
Convert to gray level and compute a frequency histogram with bins [0...255] of the pixel values present in the image.
The highest-frequency bin - the mode - corresponds to the color that is present the most. The frequency of each bin gives you the number of pixels of that color in the frame.
Take the mode as the single color to describe your frame - the color present in the largest amount.
The key point is whether the above steps are fast enough for real-time video. You'd have to try it to find out, I guess.
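If you are on OpenCV in Python, a rough sketch of those steps might look like this (the file name is hypothetical, and sp and sr are arbitrary radii to tune):
import cv2
import numpy as np

frame = cv2.imread("frame.png")

# step 1: mean shift filtering clusters nearby colors together
clustered = cv2.pyrMeanShiftFiltering(frame, sp=21, sr=51)

# step 2: gray-level frequency histogram with bins [0..255]
gray = cv2.cvtColor(clustered, cv2.COLOR_BGR2GRAY)
hist = np.bincount(gray.ravel(), minlength=256)

# step 3: the mode -- the most frequent gray level -- describes the frame
dominant = int(np.argmax(hist))
print(f"dominant gray level: {dominant} ({hist[dominant]} pixels)")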
Worst case scenario, you could loop over all the pixels in the image and do a count. I'm not sure what you are using programming-wise, but I use Python with NumPy, something similar to this, where pb is a GTK pixbuf with my image in it:
def pull_color_out(self, pb, *args):
    counter = 0
    dat = pb.get_pixels_array().copy()
    for y in range(0, pb.get_height()):      # rows
        for x in range(0, pb.get_width()):   # columns
            p = dat[y][x]
            # counts pure red pixels (RGB channels are indexed 0, 1, 2)
            if p[0] == 255 and p[1] == 0 and p[2] == 0:
                counter += 1
    return counter
Other than that, I would normally use a histogram and get the data I need from that. Mind you, this will not be your fastest option, especially for a video, but if you have time or just a few frames then hack away :P
