Filtering in signals and systems - filter

The file “noisy blur img.mat” has been made available on Blackboard. This file contains an image with a physical side length of 1.04 m. The image was blurred by a Gaussian PSF with an e^-1 radius of 7 pixels and corrupted by white Gaussian noise with a standard deviation of 12.4972.
Develop (a) an inverse filter, and (b) a Wiener-like filter to deblur and restore this image as best you can. For each part, describe in words how you implemented the filter, and show the filtered image.
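A minimal MATLAB sketch of both filters is below. The variable name loaded from the .mat file, the analytic form of the PSF (exp(-r^2/r0^2) with r0 = 7 px), the clamping threshold, and the constant K are assumptions to adapt to your data, not part of the assignment.
load('noisy blur img.mat');              % inspect which variable this creates first
img = double(img);                       % assumed variable name
[N, M] = size(img);
% Build the Gaussian PSF with an e^-1 radius of 7 pixels, centered so that
% psf2otf can move the kernel center to element (1,1).
[x, y] = meshgrid(1:M, 1:N);
cx = floor(M/2) + 1;  cy = floor(N/2) + 1;
r0 = 7;
PSF = exp(-((x - cx).^2 + (y - cy).^2) / r0^2);
PSF = PSF / sum(PSF(:));                 % normalize so the blur preserves brightness
H = psf2otf(PSF, [N, M]);                % OTF of the blur
G = fft2(img);                           % spectrum of the degraded image
% (a) Inverse filter: divide by H, guarding against division by (almost) zero.
H_inv = H;
H_inv(abs(H_inv) < 1e-3) = 1e-3;         % clamping threshold chosen by hand
f_inverse = real(ifft2(G ./ H_inv));
% (b) Wiener-like filter: conj(H) / (|H|^2 + K), with K ~ noise-to-signal power ratio.
K = 0.01;                                % tune by hand, guided by the known noise std (12.4972)
f_wiener = real(ifft2(G .* conj(H) ./ (abs(H).^2 + K)));
figure; colormap(gray); imagesc(f_inverse); axis image; title('Inverse filter');
figure; colormap(gray); imagesc(f_wiener);  axis image; title('Wiener-like filter');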

Related

Restoring Image corrupted by Gaussian and Motion Blur

An image is given to us that has been corrupted by:
Gaussian blur
Gaussian noise
Motion blur
in that order. The parameters of all of the above (filter size, variance, SNR, etc.) are known to us.
How can we restore the image?
I have tried to compute the aggregate degradation function by convolving the above, and then used the Wiener filter to restore, but the attempts have failed so far, since the blur still remains.
Could anyone please shed some light?
For Gaussian and motion blur, it is a matter of deducing the convolution kernel. Once it is known, deconvolution can be done in Fourier space. The Fourier transform of the image, divided by the Fourier transform of the kernel, gives the Fourier transform of a (hopefully) improved image.
Gaussians transform into other Gaussians, so there is no problem with divide-by-zero. But Gaussians do fall off rather fast, as exp(-x^2), so you'd be dividing by small numbers and getting large, wacky high-frequency amplitudes. So some sort of constant bias, or another way of keeping the FT of the kernel from getting small, must be applied. That's where the Wiener filter comes in. The bias is usually chosen in relation to random noise levels, or quantization.
For motion blur, a typical case is when the clean image is convolved with a short line segment. Unfortunately, the Fourier transform of a sharply cut-off line segment has plenty of zeros. Again, the Wiener filter comes to the rescue.
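A rough MATLAB sketch of that Fourier-space deconvolution with a bias (g stands for the degraded image; the kernel parameters and the constant K are placeholders, not values from the question):
% Combine the known Gaussian and motion kernels into one degradation kernel,
% then deconvolve in Fourier space with a small bias (Wiener-style).
PSF_gauss  = fspecial('gaussian', 15, 2);    % placeholder size / sigma
PSF_motion = fspecial('motion', 9, 0);       % placeholder length / angle
PSF_total  = conv2(PSF_gauss, PSF_motion);   % blur after blur = convolution of the kernels
H = psf2otf(PSF_total, size(g));             % g is the degraded image
K = 0.01;                                    % bias ~ noise-to-signal ratio, tuned by hand
f_est = real(ifft2(fft2(g) .* conj(H) ./ (abs(H).^2 + K)));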
Additive Gaussian noise cannot be removed, but it can be averaged out. The simplest, quickest way is to blur the image with a Gaussian, box, or other filter. The biggest problem with that: you end up with a blurred image! Median filters are somewhat better at preserving edges and details, as long as they are not too small. There are many noise reduction techniques out there.
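For example (a sketch, with g as the noisy image and arbitrary kernel sizes):
g_blur   = imfilter(g, fspecial('gaussian', 5, 1));  % averages the noise out, but also blurs edges
g_median = medfilt2(g, [3 3]);                       % better at preserving edges and small details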
Sometimes noise reduction is easy for certain types of images. For Cassini imaging work, most image features were either high-contrast hard edges (planet edges, craters) or softly varying (cloud details in atmospheres), so I used an edge detector, fattened (dilated) its output, blurred it, and used that as a mask to protect parts of the image from a small-radius blur filter, effectively applying different filters to different regions.
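A rough MATLAB sketch of that masking idea (the detector, the structuring-element size, and the blending step are my guesses at the workflow, not the actual Cassini pipeline):
% Protect edges while smoothing flat regions: build a mask from an edge map,
% fatten (dilate) and soften it, then blend the original with a blurred copy.
edges   = edge(g, 'canny');                                    % detector choice is arbitrary
mask    = imdilate(edges, strel('disk', 3));                   % fatten the edge regions
mask    = imfilter(double(mask), fspecial('gaussian', 9, 2));  % soften the mask
g_blur  = imfilter(g, fspecial('gaussian', 5, 1));             % small-radius blur
g_clean = mask .* g + (1 - mask) .* g_blur;                    % keep edges, smooth the rest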
There's the Signal Processing Stack Exchange site (in beta for now), which may have questions and answers about restoring corrupted images: https://dsp.stackexchange.com/questions

How to do mask operations in OpenCV - frequency masking

I would like to create a frequency mask in OpenCV but have no idea how to go about it. The frequency mask will be an ideal band-pass filter, so the image filtering will be done in the frequency domain. For this example, let's say frequencies between 100 Hz and 200 Hz will be passed.
Anyone know how to do this?
Thanks in advance!
Perform the DFT of your image.
Make a filter matrix the same size as your image, containing a ring with inner radius R1 = wavelength_min and outer radius R2 = wavelength_max. The ring is filled with ones and the rest of the elements are zeros.
Multiply the DFT of your image by this matrix.
Perform the inverse DFT of the resulting image.
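In MATLAB the same four steps look roughly like this (R1 and R2 are placeholder band limits in frequency bins; in OpenCV the steps map onto its DFT / inverse-DFT functions and an element-wise multiply):
F = fftshift(fft2(img));                          % DFT, with the DC component moved to the center
[N, M] = size(img);
[u, v] = meshgrid(-floor(M/2):ceil(M/2)-1, -floor(N/2):ceil(N/2)-1);
r = sqrt(u.^2 + v.^2);                            % distance from the DC component
R1 = 20;  R2 = 60;                                % placeholder band limits
mask = (r >= R1) & (r <= R2);                     % ones inside the ring, zeros elsewhere
filtered = real(ifft2(ifftshift(F .* mask)));     % multiply, undo the shift, inverse DFT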

Scaling Laplacian of Gaussian Edge Detection

I am using Laplacian of Gaussian for edge detection using a combination of what is described in http://homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm and http://wwwmath.tau.ac.il/~turkel/notes/Maini.pdf
Simply put, I'm using this equation:
double[][] kernel = new double[kernelSize][kernelSize];
int half = kernelSize / 2;
for (int i = -half; i <= half; i++)
{
    for (int j = -half; j <= half; j++)
    {
        // LoG(x, y) = -1/(pi*sigma^4) * (1 - r^2/(2*sigma^2)) * exp(-r^2/(2*sigma^2))
        double r2 = i * i + j * j;
        double L_xy = -1 / (Math.PI * Math.pow(sigma, 4))
                * (1 - r2 / (2 * Math.pow(sigma, 2)))
                * Math.exp(-r2 / (2 * Math.pow(sigma, 2)));
        kernel[i + half][j + half] = L_xy * 426.3; // empirical scaling factor
    }
}
and using the L_xy values to build the LoG kernel.
The problem is, when the image size is larger, application of the same kernel is making the filter more sensitive to noise. The edge sharpness is also not the same.
Let me put an example here...
Suppose we've got this image:
Using a value of sigma = 0.9 and a kernel size of 5 x 5 matrix on a 480 × 264 pixel version of this image, we get the following output:
However, if we use the same values on a 1920 × 1080 pixels version of this image (same sigma value and kernel size), we get something like this:
[Both the images are scaled down version of an even larger image. The scaling down was done using a photo editor, which means the data contained in the images are not exactly similar. But, at least, they should be very near.]
Given that the larger image is roughly 4 times the smaller one, I also tried scaling the sigma by a factor of 4 (sigma *= 4), and the output was... you guessed it, a black canvas.
Could you please help me work out how to implement a LoG edge detector that finds the same features in an input signal even when the signal is scaled up or down (the scaling factor will be given)?
Looking at your images, I suppose you are working in 24-bit RGB. When you increase your sigma, the response of your filter weakens accordingly; thus what you get in the larger image with a larger kernel are values close to zero, which are either truncated or so close to zero that your display cannot distinguish them.
To make differentials across different scales comparable, you should use the scale-normalized (scale-space) differential operator (Lindeberg):
LoG(x, y; \sigma) = \sigma^{\gamma} \nabla^2 (G_{\sigma} * L)
Essentially, the differential operators are applied to the Gaussian kernel function G_{\sigma}, and the result (or alternatively the convolution kernel itself; it is just a scalar multiplier anyway) is scaled by \sigma^{\gamma}. Here L is the input image and LoG is the Laplacian-of-Gaussian image.
When the order of the differential is 2, \gamma is typically set to 2.
Then you should get quite similar magnitudes in both images.
Sources:
[1] Lindeberg: "Scale-space theory in computer vision" 1993
[2] Frangi et al. "Multiscale vessel enhancement filtering" 1998
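A minimal MATLAB sketch of that normalization, using the built-in LoG kernel (the input image, the kernel-size rule, and the boundary option are placeholders of mine):
% Scale-normalized LoG: multiply the response by sigma^gamma with gamma = 2,
% so responses computed at different scales / image sizes become comparable.
I = im2double(rgb2gray(imread('test.png')));   % placeholder input image
sigma = 0.9;                                   % e.g. 0.9 for the small image, 4*0.9 for the 4x-larger one
ksize = 2*ceil(3*sigma) + 1;                   % kernel just large enough to hold the LoG
LoG   = fspecial('log', ksize, sigma);
response = sigma^2 * imfilter(I, LoG, 'replicate');  % gamma = 2 for a second-order derivative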

Applying 1D Gaussian blur to a data set

I have a data set where each object has a Value and a Price. I want to apply Gaussian blur to each object's Price using its Value. Since my data has only one component to use in blurring, I am trying to apply a 1D Gaussian blur.
My code does this:
totalPrice = 0;
totalValue = 0;
for each object.OtherObjectsWithinPriceRange()
totalPrice += price;
totalValue += Math.Exp(-value*value);
price = totalPrice/totalValue;
I see good results, but the 1D Gaussian blur algorithms I see online seem to use deviations, sigma, PI, etc. Do I need them, or are they strictly for 2D Gaussian blurs? Those algorithms combine the 1D blur passes vertically and horizontally, so they are still accounting for 2D.
Also, I display the results as colors, but the white areas come out a little over 1 (white). How can I normalize this? Should I just clamp the values to 1? That's why I am wondering whether I am using the correct formula.
Your code applies some sort of a blur, though definitely not Gaussian. The Gaussian blur would look something like
kindaSigma = 1;
priceBlurred = object.price;
for each object.OtherObjectsWithinPriceRange()
priceBlurred += price*Math.Exp(-value*value/kindaSigma/kindaSigma);
and that is only assuming that value is proportional to a "distance" between the object and the other objects within the price range, whatever this "distance" means in your application.
To your questions.
A 2D Gaussian blur is completely equivalent to a combination of vertical and horizontal 1D Gaussian blurs done one after another. That's how the 2D Gaussian blur is usually implemented in practice.
You don't need any PI or sigma as a multiplicative factor in front of the Gaussian - such factors merely scale the image and can be safely ignored.
The sigma (standard deviation) under the exponent has a major impact on the result, but it is not possible for me to tell you if you need it or not. It depends on your application.
Want more blur: use larger kindaSigma in the snippet above.
Want less blur: use smaller kindaSigma.
When kindaSigma is too small, you won't notice any blur at all. When kindaSigma is too large, the Gaussian blur effectively transforms itself into a moving average filter.
Play with it and choose what you need.
I am not sure I understand your normalization question. In image processing it is common to store each color component (R, G, B) as an unsigned char, so black is represented by (0,0,0) and white by (255,255,255). Of course, you are free to choose a different representation and take white to be 1. But keep in mind that for visualization packages that use the standard 8-bit representation, a value of 1 means an almost black color. So you will likely need to rescale your image before displaying it.
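For what it's worth, a small MATLAB-style sketch of a properly normalized Gaussian-weighted average over the neighbours, which keeps the blurred price within the range of the input prices (the vector names values and prices are mine; kindaSigma plays the same role as above):
w = exp(-(values.^2) / kindaSigma^2);     % same weighting as in the snippet above
priceBlurred = sum(w .* prices) / sum(w); % dividing by sum(w) keeps the result in range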

What is the optical transfer function in image restoration?

I am studying inverse filtering and was trying to code it, so I sought out some references from the net. Everyone considers the optical transfer function, which is nowhere to be seen in the Gonzalez book I am referring to.
% Inverse_Filter_Demo-
clc
clear all
close all
original_image=imread('cameraman.jpg'); %loading the original (un-blurred) image
original_image=double(original_image);
%The blur function (PSF - Point Spread Function) that will be applied to the original image
PSF=fspecial('motion',20,45);
%Adding blur to the original image
degraded_image = imfilter(original_image,PSF,'circular','conv');
OTF= psf2otf(PSF,[size(degraded_image,1) size(degraded_image,2)]);%Getting OTF from PSF
Inverse_Filter=conj(OTF)./((abs(OTF)).^2); %The inverse filter
%Performing the Fourier transform of the degraded image
FT_degraded_image=fftn(degraded_image);
%Performing the restoration itself
restored_image=abs(ifftn(FT_degraded_image.*Inverse_Filter));
%Presenting the restoration results:
figure;
set(gca,'Fontsize',14);
colormap(gray);
imagesc(original_image,[0 255]);
truesize;
title('Original image');
figure;
set(gca,'Fontsize',14);
colormap(gray);
imagesc(degraded_image,[0 255]);
truesize;
title('Degraded image');
figure;
set(gca,'Fontsize',14);
colormap(gray);
imagesc(restored_image,[0 255]);
truesize;
title('Restoration of the degraded image (using Inverse Filter)');
Your question is unclear, and probably more appropriate for dsp.stackexchange.com. However, if what you are asking is "what is the optical transfer function?", then the Wikipedia article on OTFs is a fine place to start.
The simplest way to think about it is that the optical transfer function (OTF) is the Fourier transform of the point spread function (PSF). Usually the PSF is a filter (convolution kernel) that describes how a single pin-hole type light source would get smeared into an actual image by some device.
The OTF is just the amplitude/phase representation of that smearing process. It is the filter that the image's Fourier transform is multiplied by in Fourier space to produce the smeared output image's Fourier transform (instead of convolving, which is what you do with the PSF in the spatial domain). Applying the inverse Fourier transform after applying the OTF should give you the actual image the device would produce.
For mathematical convenience, and sometimes for processing efficiency, it can be more expedient to work with the OTF instead of the regular spatial domain's PSF. This is why you'll see some algorithms and textbooks describe their methods with the OTF instead of the PSF.
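As a concrete check of the PSF/OTF relationship described above, here is a small MATLAB sketch (it assumes the Image Processing Toolbox and its bundled cameraman.tif test image): filtering with the PSF in the spatial domain and multiplying by the OTF in the frequency domain give the same blurred image.
PSF = fspecial('gaussian', 9, 2);          % example blur kernel
I   = im2double(imread('cameraman.tif'));  % any grayscale test image
% psf2otf pads the PSF to the image size and circularly shifts it so the
% kernel center sits at element (1,1), then takes the FFT.
OTF = psf2otf(PSF, size(I));
% Spatial convolution and frequency-domain multiplication should agree
% (up to numerical precision) for circular boundary conditions.
blurred_spatial = imfilter(I, PSF, 'circular', 'conv');
blurred_freq    = real(ifft2(fft2(I) .* OTF));
max(abs(blurred_spatial(:) - blurred_freq(:)))   % should be ~1e-15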
