I have this image:
I want to calculate the SNR of this image. For that I used the following code:
img = imread('noicy.JPG');
img = double(img(:));              % flatten to a column vector of doubles
ima = max(img);                    % maximum pixel value
imi = min(img);                    % minimum pixel value
ims = std(img);                    % standard deviation of the pixel values
snr = 20*log10((ima - imi)/ims)    % dynamic range over noise, in dB
Is this the correct code to calculate SNR?
The definition of SNR can be found here or here.
Both the standard and the industry definition can be used (10*log10(x) and 20*log10(x)); check this.
Now, the signal is the mean of the pixel values (mean(img(:))) and the noise is the standard deviation of the pixel values (std(img(:))).
You may report either the plain ratio, or SNR = 10*log10(signal/noise) to express the result in decibels.
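For instance, a minimal sketch of that mean-over-std definition in MATLAB, reusing the file name from the question:

img = double(imread('noicy.JPG'));   % file name from the question
signal = mean(img(:));               % signal: mean of the pixel values
noise = std(img(:));                 % noise: standard deviation of the pixel values
snr_db = 10 * log10(signal / noise)  % SNR in decibels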
I am reading Gonzalez's image processing book, and as you know, the log transformation is defined in the book as follows:
s = c*log(1+r)
Now I have one question:
Is the logarithm base 10, or is it the natural logarithm, whose base is Napier's number e?
The log transform is used for dark-pixel enhancement: the dark pixels in an image are expanded compared to the higher pixel values. So the base can be any number, depending on the desired visualization effect.
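For instance, a minimal MATLAB sketch of the transform on an image scaled to [0, 1] (the file name is a hypothetical placeholder, and this choice of c is just one common normalization, not prescribed by the book):

r = im2double(imread('dark_image.png'));  % hypothetical file; values in [0, 1]
c = 1 / log(1 + max(r(:)));               % scale so the output stays in [0, 1]
s = c * log(1 + r);                       % expands dark values, compresses bright ones
imshow([r s])                             % original next to the enhanced image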
I think log10 is often used because it is related to the decibel scale in signal processing, such as in the signal-to-noise definition.
If this is log() from math.h, then it's the natural logarithm.
That is, its base is e, which is approximately 2.71828.
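For what it's worth, the same holds in MATLAB: log() is the natural logarithm, and log10() and log2() are separate functions:

log(exp(1))   % returns 1: log() is base e
log10(100)    % returns 2
log2(8)       % returns 3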
I am working on an image processing project and I have to use entropyfilt (from MATLAB).
I researched and found some information, but not enough. I can calculate the entropy value of an image, but I don't know how to write an entropy filter. There is a similar question on the site, but I didn't understand it either.
Can anybody help me understand the entropy filter?
From the MATLAB documentation:
J = entropyfilt(I) returns the array J, where each output pixel contains the entropy value of the 9-by-9 neighborhood around the corresponding pixel in the input image I. I can have any dimension. If I has more than two dimensions, entropyfilt treats it as a multidimensional grayscale image and not as a truecolor (RGB) image. The output image J is the same size as the input image I.
For each pixel, you look at the 9-by-9 area around it and calculate the entropy. Since entropy is a nonlinear function of the pixel values, this is not something you can do with a simple kernel filter. You have to loop over the image and do the calculation on a per-pixel basis.
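A minimal sketch of that per-pixel loop for a uint8 grayscale image (it mirrors what entropyfilt computes with its default 9-by-9 neighborhood, but it is not the toolbox's actual implementation):

I = imread('cameraman.tif');            % any 8-bit grayscale image
[rows, cols] = size(I);
pad = 4;                                % half of the 9-by-9 window
Ip = padarray(I, [pad pad], 'symmetric');
J = zeros(rows, cols);
for r = 1:rows
    for c = 1:cols
        patch = Ip(r:r+8, c:c+8);       % 9-by-9 neighborhood around (r, c)
        p = imhist(patch);              % 256-bin histogram of the patch
        p = p / sum(p);                 % normalize to probabilities
        p = p(p > 0);                   % drop empty bins to avoid log2(0)
        J(r, c) = -sum(p .* log2(p));   % Shannon entropy in bits
    end
end
imshow(mat2gray(J))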
I want to compute the depth entropy of a depth image in MATLAB (same as in this work). Unfortunately, the authors don't reply to my emails. There is a function, entropyfilt, that computes the local entropy of a grayscale image. I've used this function with a depth image captured by a Kinect, but it hasn't worked properly. Here is my input depth image:
Here is the code used for entropy computing:
J = entropyfilt(Depth);
imshow(mat2gray(J))
Sorry, my reputation isn't enough, so I can't upload my result image.
How can I compute the entropy image of a depth image? I want to obtain an image like figure 4 in the above paper.
Thanks in advance.
As written in the paper: for each pixel you first extract two patches from the image, then you calculate the entropy of each patch. The formula for this is also given in the paper and is well known in statistics.
If you want to use the function entropyfilt, you need to provide as a second argument a neighborhood mask that describes the patch (all pixels within the patch need to be 1 in the mask, others need to be 0). This is detailed in the documentation of that function.
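For instance, a minimal sketch, assuming Depth is the Kinect depth image already in the workspace and a square patch (adjust nhood to the patch shape used in the paper):

nhood = true(9);                          % 9-by-9 patch of ones; adjust to the paper's patch
J = entropyfilt(mat2gray(Depth), nhood);  % mat2gray rescales the depth values to [0, 1]
imshow(mat2gray(J))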
The authors generate one color image from the two entropy images; how they do so, they seemingly forgot to mention.
I think the paper is of low quality.
image1 = imread('path/to/image.png');  % replace with the image location
entropy(image1)                        % global entropy of the whole image (a single scalar)
imshow(image1)
If anybody is familiar with classification in remote sensing, you know that we should first choose a region of the image and use the information in this region to extract statistical parameters.
How can I choose this area of the image in MATLAB?
I think I found the answer to my own question.
As our friend user2466766 said, I used roipoly to get a mask image, and then I multiplied this mask with my image using '.*'.
Then I extracted the nonzero elements of the resulting matrix with the function nonzeros.
Now I have the digital numbers of the region within the polygon in a column vector, which can be used to calculate statistical parameters such as the mean and variance.
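A minimal sketch of that workflow (the file name is a hypothetical placeholder):

img = im2double(imread('scene.tif'));   % hypothetical file name
mask = roipoly(img);                    % draw the polygon interactively; returns a logical mask
roi = img .* mask;                      % zero out everything outside the polygon
vals = nonzeros(roi);                   % pixel values inside the region, as a column vector
fprintf('mean = %g, variance = %g\n', mean(vals), var(vals));

Note that vals = img(mask) gives the same values more directly, and also keeps genuine zero-valued pixels inside the polygon, which nonzeros would drop.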
Try roipoly. It allows you to create a mask image. If you are looking for more flexibility you can use poly2mask.
I have the following images:
Corrupted with 30% salt and pepper noise
After denoising
I have denoised the images with various techniques.
How do I compare which method is best in terms of denoising?
function PSNR = PeakSignaltoNoiseRatio(origImg, distImg)
origImg = double(origImg);
distImg = double(distImg);
[M, N] = size(origImg);
err = origImg - distImg;              % renamed from 'error' to avoid shadowing the built-in
MSE = sum(sum(err .* err)) / (M * N);
if (MSE > 0)
    PSNR = 10*log(255*255/MSE) / log(10);
else
    PSNR = 99;                        % sentinel value for identical images
end
Which two images should I take to calculate the PSNR?
Did you check the Wikipedia article on PSNR? For one, it gives a cleaner formula that would fix up your code (for example, why are you checking whether MSE > 0? If the images differ at all, MSE is strictly positive; it is zero only when the two images are identical, in which case PSNR is infinite. Also, this looks to be MATLAB code, so use the log10() function to save some confusing base conversions. Lastly, be sure that the input to this function is actually a quantized image on the 0-255 scale, and not a double-valued image between 0 and 1).
Your question is unclear. If you want to use PSNR as a metric for performance, then you should compute the PSNR of each denoised method against the original and report those numbers. That probably won't give a very good summary of which methods are doing better, but it's a start. Another method could be to hand-select smaller sub-regions of the original image that you think correspond to different qualitative phenomena, such as a window on the background, a window on the foreground, and a window spanning the two. Then compute the PSNR for only those windows, again repeated for each denoised result vs. the original. In the end, you want a table showing PSNR of each different method as compared to the original, possibly with this sub-window breakdown.
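For instance, a minimal sketch of that comparison, using the PSNR function above (the file names are hypothetical placeholders):

orig = imread('original.png');                           % the clean reference image
results = {'median.png', 'adaptive.png', 'wiener.png'};  % one file per denoising method
for k = 1:numel(results)
    den = imread(results{k});
    fprintf('%s: PSNR = %.2f dB\n', results{k}, PeakSignaltoNoiseRatio(orig, den));
end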
You may want to look into more sophisticated methods depending on what application this is for. The chapter on total variation image denoising in Tony Chan's book is very helpful (link).
Here is a Jython/Python example using the DataMelt program.
Put these lines into a file "test.py" and run it inside DataMelt.
It will print the PSNR value for two downloaded images. Replace the file names if you have different images.
from Catalano.Imaging.Tools import ObjectiveFidelity
from Catalano.Imaging import FastBitmap
from jhplot import *

# download the clean and the noisy example images
print Web.get("http://jwork.org/dmelt/examples/data/logo_jhepwork.png")
print Web.get("http://jwork.org/dmelt/examples/data/logo_jhepwork_noisy.png")

original = FastBitmap("logo_jhepwork.png")
original.toGrayscale()
reconstructed = FastBitmap("logo_jhepwork_noisy.png")
reconstructed.toGrayscale()

# compare the two grayscale images
img = ObjectiveFidelity(original, reconstructed)
print "Peak signal-to-noise ratio (PSNR)=", img.getPSNR()