What is the optical transfer function in image restoration?

I am studying inverse filtering and was trying to code it, so I sought out some references on the net. Everyone uses the optical transfer function, which is nowhere to be seen in the Gonzalez book I am referring to.
% Inverse_Filter_Demo-
clc
clear all
close all
original_image=imread('cameraman.tif'); %loading the original (un-blurred) image; cameraman.tif ships with the Image Processing Toolbox
original_image=double(original_image);
%The blur function (PSF- Point Spread Function) that will be added to the original image
PSF=fspecial('motion',20,45);
%Adding blur to the original image
degraded_image = imfilter(original_image,PSF,'circular','conv');
OTF= psf2otf(PSF,[size(degraded_image,1) size(degraded_image,2)]);%Getting OTF from PSF
Inverse_Filter=conj(OTF)./((abs(OTF)).^2); %The inverse filter (equal to 1./OTF; unstable where OTF is near zero)
%Performing the Fourier transform of the degraded image
FT_degraded_image=fftn(degraded_image);
%Performing the restoration itself
restored_image=abs(ifftn(FT_degraded_image.*Inverse_Filter));
%Presenting the restoration results:
figure;
set(gca,'Fontsize',14);
colormap(gray);
imagesc(original_image,[0 255]);
truesize;
title('Original image');
figure;
set(gca,'Fontsize',14);
colormap(gray);
imagesc(degraded_image,[0 255]);
truesize;
title('Degraded image');
figure;
set(gca,'Fontsize',14);
colormap(gray);
imagesc(restored_image,[0 255]);
truesize;
title('Restoration of the degraded image (using Inverse Filter)');

Your question is unclear, and probably more appropriate for dsp.stackexchange.com. However, if what you are asking is "what is the optical transfer function?", then the Wikipedia article on OTFs is a fine place to start.
The simplest way to think about it is that the optical transfer function (OTF) is the Fourier transform of the point spread function (PSF). The PSF is a filter (convolution kernel) that describes how a single pinhole-type light source would get smeared into the actual image by some device.
The OTF is just the amplitude/phase representation of that smearing process. It is the filter that the image's Fourier transform is multiplied by in the frequency domain to produce the Fourier transform of the smeared output image (instead of convolving with the PSF in the spatial domain). Applying the inverse Fourier transform after applying the OTF gives you the actual image the device would produce.
For mathematical convenience, and sometimes for processing efficiency, it can be more expedient to work with the OTF instead of the spatial-domain PSF. This is why you'll see some algorithms and textbooks describe their methods in terms of the OTF instead of the PSF.
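To see the relationship concretely, here is a minimal sketch (using the toolbox image cameraman.tif and the same motion PSF as the code above) showing that circular convolution with the PSF and frequency-domain multiplication by the OTF give the same result:
img = double(imread('cameraman.tif'));
PSF = fspecial('motion',20,45);
OTF = psf2otf(PSF,size(img)); % pads, circularly shifts, and FFTs the PSF
viaPSF = imfilter(img,PSF,'circular','conv'); % convolve in the spatial domain
viaOTF = real(ifft2(fft2(img).*OTF)); % multiply in the frequency domain
max(abs(viaPSF(:)-viaOTF(:))) % difference is close to machine precision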

Related

Filtering in signal and system

The file “noisy blur img.mat” has been made available on Blackboard. This file contains an image with a physical side length of 1.04 m. The image was blurred by a Gaussian PSF with an e^-1 radius of 7 pixels, and corrupted by white Gaussian noise with a standard deviation of 12.4972.
Develop (a) an inverse filter, and (b) a Wiener-like filter to deblur and restore this image as best you can. For each part, describe in words how you implemented the filter, and show the filtered image.
How do I use a Wiener filter and an inverse filter for this signals-and-systems problem?
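As a starting point, here is a minimal sketch of the two filters under the stated assumptions (the variable name inside the .mat file is unknown, so it is loaded generically; an e^-1 radius of 7 px corresponds to a Gaussian sigma of 7/sqrt(2) px):
S = load('noisy blur img.mat'); % assumes the file holds a single image variable
f = fieldnames(S);
img = double(S.(f{1}));
sigma = 7/sqrt(2); % exp(-r^2/(2*sigma^2)) = e^-1 at r = 7 px
PSF = fspecial('gaussian', 6*ceil(sigma)+1, sigma);
OTF = psf2otf(PSF, size(img));
F = fft2(img);
restInverse = real(ifft2(F./OTF)); % (a) naive inverse filter; amplifies noise badly
NSR = 12.4972^2 / var(img(:)); % (b) noise-to-signal power ratio from the given std
W = conj(OTF) ./ (abs(OTF).^2 + NSR); % Wiener-like filter
restWiener = real(ifft2(F.*W));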

Analysing how well an image is represented?

Is there some algorithm that I can use to analyse image representation accuracies? Do people such as compression algorithm designers have some sort of objective way of comparing two image representations?
Say I'm trying to display a circle as a raster image; the higher the resolution, the closer the image comes to a perfect circle. The representations clearly become more accurate as you go along.
Now, how can I measure how close a particular representation of the circle is to the circle?
One method I came up with was to measure the area of the bits that didn't match between the high-res and low-res images (the XOR), which gives 4.12% and 1.15% for the two representations above.
But how would I apply this to a non-silhouette image such as a photo or an anti-aliased image?
I assume that you are not thinking of mosaic images, which are easy to detect from the pattern of repeated values.
For a natural image, the question does not really make sense: the image is as accurate as it can be under area sampling (and in any case you have no ground truth).
This is your antialiased image:
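To make the XOR measure concrete, here is a minimal sketch (the circle size and downsampling factor are made up); for photos or anti-aliased images the same idea can be applied to intensities, e.g. as a mean absolute difference:
[X,Y] = meshgrid(1:512);
hiRes = (X-256).^2 + (Y-256).^2 <= 200^2; % high-res "ground truth" circle
loRes = imresize(double(hiRes),1/16) > 0.5; % coarse 32x32 raster version
loUp = imresize(double(loRes),16,'nearest') > 0.5; % back to 512x512 for comparison
mismatch = nnz(xor(hiRes,loUp)) / nnz(hiRes); % fraction of mismatched area
fprintf('XOR mismatch: %.2f%%\n', 100*mismatch);
% For grayscale representations A and B of the same scene, compare intensities:
% mad = mean(abs(double(A(:)) - double(B(:)))); % mean absolute difference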

extract motion blur of an image in matlab

I found some papers saying that you can analyse the gradient histogram (a blurred image has gradients that follow a heavy-tailed distribution) or use the FFT (a blurred image has less high-frequency content) to detect blur in an image; see, for example:
Is there a way to detect if an image is blurry?
But I am not quite sure how to implement this in MATLAB: how to define the threshold value, and so on.
[Gx, Gy] = imgradientxy(a);
G = sqrt(Gx.^2+Gy.^2); % gradient magnitude
What should I do after running these commands and finding G?
What should I do if I want to plot a graph of the number of pixels versus G?
I am new to MATLAB and image processing. Could anyone kindly provide more details on how to implement this?
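For the "number of pixels versus G" plot specifically, a histogram of the gradient magnitudes suffices; a minimal sketch (cameraman.tif is a stand-in for your image):
a = imread('cameraman.tif');
[Gx, Gy] = imgradientxy(a);
G = sqrt(Gx.^2 + Gy.^2); % gradient magnitude per pixel
histogram(G(:)); % number of pixels versus G
xlabel('Gradient magnitude G');
ylabel('Number of pixels');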
Preparation: we read the cameraman image, which is often used for visualizing image processing algorithms, and add some motion blur.
origIm = imread('cameraman.tif');
littleBlurredIm = imfilter(origIm,fspecial('motion',5,45),'replicate');
muchBlurredIm = imfilter(origIm,fspecial('motion',20,45),'replicate');
which gives us the following images to start with:
To calculate the gradient magnitude, you can use the imgradient function, which returns magnitude and angle, so we'll simply discard the angle:
[lpOrigIm,~] = imgradient(origIm);
[lpLittleBlurredIm,~] = imgradient(littleBlurredIm);
[lpMuchBlurredIm,~] = imgradient(muchBlurredIm);
which gives:
You can visually see that the original image has very sharp and clear edges. The image with a little blur still has some features, and the image with much blur retains only a few strong gradient values.
As proposed in nikie's answer to this question, we can now create a measure of blurriness. A (more or less) robust measure would be, for example, the median of the top 0.1% of the values:
% Number of pixels to look at: 0.1%
nPx = round(0.001*numel(origIm));
% Sort values to pick top values
sortedOrigIm = sort(lpOrigIm(:));
sortedLittleBlurredIm = sort(lpLittleBlurredIm(:));
sortedMuchBlurredIm = sort(lpMuchBlurredIm(:));
% Calculate measure
measureOrigIm = median(sortedOrigIm(end-nPx+1:end));
measureLittleBlurredIm = median(sortedLittleBlurredIm(end-nPx+1:end));
measureMuchBlurredIm = median(sortedMuchBlurredIm(end-nPx+1:end));
Which gives the following results:
Original image: 823.7
Little Blurred image: 593.1
Much Blurred image: 490.3
Here is a comparison of this blurriness measure for different motion blur angles and blur amplitudes.
Finally, I tried it on the test images from the answer linked above:
which gives
Interpretation: As you can see, it is possible to detect whether an image is blurred. However, it appears difficult to detect how strongly blurred the image is, as this also depends on the angle of the blur relative to the scene, and on the imperfect gradient calculation. Furthermore, the absolute value is very scene-dependent, so you might have to put some prior knowledge about the scene into the interpretation of this value.
This is a very interesting topic.
Although gradient magnitude can be used as a good feature for blur detection, it fails when dealing with uniform regions in images; in other words, it cannot distinguish between blurred regions and flat regions. There are many other solutions; some of them detect flat regions explicitly to avoid classifying them as blurred. If you want more information, you can check these links:
You can find many good recent papers from the CVPR conference; many of them have websites where the details are discussed and the code is provided. This one, http://www.cse.cuhk.edu.hk/leojia/projects/dblurdetect/, is one of the papers I worked on, and the code is available there.
You can also check other CVPR papers; most of them provide code. Here is another one: http://shijianping.me/jnb/index.html

To imread Parula image in Matlab without losing resolution

There is no bijection between RGB and Parula, as discussed here.
I am wondering how best to do image processing on files stored with the Parula colormap.
This challenge grew out of this thread about removing black color from ECG images, extending the case to a generalized problem with Parula colors.
Data:
which is generated by
[X,Y,Z] = peaks(25);
imgParula = surf(X,Y,Z);
view(2);
axis off;
It is not the point of this thread to use this generating code in your solution; the task is to read the second image itself.
Code:
[imgParula, map, alpha] = imread('http://i.stack.imgur.com/tVMO2.png');
where map is [] and alpha is a completely white image. Doing imshow(imgParula) gives
where you see a lot of interference and loss of resolution, because MATLAB reads the image as RGB although the actual colormap is Parula.
Resizing this picture does not improve resolution.
How can you read an image into its corresponding colormap in MATLAB? I did not find any parameter for specifying the colormap when reading.
The Problem
There is a one-to-one mapping from indexed colors in the parula colormap to RGB triplets. However, no such one-to-one mapping exists in reverse to convert an arbitrary RGB triplet back to a parula indexed color (indeed, there are an infinite number of ways to do so). Thus, there is no one-to-one correspondence or bijection between the two spaces. The plot below, which shows the R, G, and B values for each parula index, makes this clearer.
This is the case for most indexed colors. Any solution to this problem will be non-unique.
A Built-in Solution
After playing around with this a bit, I realized that there's already a built-in function that may be sufficient: rgb2ind, which converts RGB image data to indexed image data. This function uses dither (which in turn calls the mex function ditherc) to perform the inverse colormap transformation.
Here's a demonstration that uses JPEG compression to add noise and distort the colors in the original parula index data:
img0 = peaks(32); % Generate sample data
img0 = img0-min(img0(:));
img0 = floor(255*img0./max(img0(:))); % Convert to 0-255
fname = [tempname '.jpg']; % Save file in temp directory
map = parula(256); % Parula colormap
imwrite(img0,map,fname,'Quality',50); % Write data to compressed JPEG
img1 = imread(fname); % Read RGB JPEG file data
img2 = rgb2ind(img1,map,'nodither'); % Convert RGB data to parula colormap
figure;
image(img0); % Original indexed data
colormap(map);
axis image;
figure;
image(img1); % RGB JPEG file data
axis image;
figure;
image(img2); % rgb2ind indexed image data
colormap(map);
axis image;
This should produce images similar to the first three below.
Alternative Solution: Color Difference
Another way to accomplish this task is by comparing the difference between the colors in the RGB image with the RGB values that correspond to each colormap index. The standard way to do this is by calculating ΔE in the CIE L*a*b* color space. I've implemented a form of this in a general function called rgb2map that can be downloaded from my GitHub. This code relies on makecform and applycform in the Image Processing Toolbox to convert from RGB to the 1976 CIE L*a*b* color space.
The following code will produce an image like the one on the right above:
img3 = rgb2map(img1,map);
figure;
image(img3); % rgb2map indexed image data
colormap(map);
axis image;
For each RGB pixel in an input image, rgb2map calculates the color difference between it and every RGB triplet in the input colormap using the CIE 1976 standard. The min function is used to find the index of the minimum ΔE (if more than one minimum value exists, the index of the first is returned). More sophisticated means can be used to select the "best" color in the case of multiple ΔE minima, but they will be more costly.
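The core of that search fits in a few lines. Here is a minimal sketch of the idea (not the actual rgb2map code; pdist2 requires the Statistics and Machine Learning Toolbox), reusing img1 and map from the demo above:
cform = makecform('srgb2lab');
labMap = applycform(map, cform); % 256-by-3 L*a*b* palette
labImg = applycform(im2double(img1), cform); % M-by-N-by-3 L*a*b* image
labPix = reshape(labImg, [], 3); % one row per pixel
d = pdist2(labPix, labMap); % delta-E (CIE 1976) from every pixel to every palette entry
[~, idx] = min(d, [], 2); % index of the nearest palette color; first minimum wins
imgIdx = reshape(idx, size(img1,1), size(img1,2)); % indexed image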
Conclusions
As a final example, I used an image of the namesake Parula bird to compare the two methods in the figure below. The two results are quite different for this image. If you manually adjust rgb2map to use the more complex CIE 1994 color difference standard, you'll get yet another rendering. However, for images that more closely match the original parula colormap (as above) both should return more similar results. Importantly, rgb2ind benefits from calling mex functions and is almost 100 times faster than rgb2map despite several optimizations in my code (if the CIE 1994 standard is used, it's about 700 times faster).
Lastly, those who want to learn more about colormaps in Matlab, should read this four-part MathWorks blog post by Steve Eddins on the new parula colormap.
Update 6-20-2015: The rgb2map code described above has been updated to use different color space transforms, which improves its speed by almost a factor of two.

Restoring Image corrupted by Gaussian and Motion Blur

An image is given to us that has been corrupted by:
Gaussian blur
Gaussian noise
Motion blur
in that order. The parameters of all the above (filter size, variance, SNR, etc) are known to us.
How can we restore the image?
I have tried to compute the aggregate degradation function by convolving the above kernels together, and then used a Wiener filter to restore the image, but my attempts have failed so far: the blur remains.
Could anyone please shed some light?
For Gaussian and motion blur, it is a matter of deducing the convolution kernel. Once it is known, deconvolution can be done in Fourier space. The Fourier transform of the image, divided by the Fourier transform of the kernel, gives the Fourier transform of a (hopefully) improved image.
Gaussians transform into other Gaussians, so there is no problem with divide-by-zero. But Gaussians do fall off rather fast, as exp(-x^2), so you'd be dividing by small numbers and getting large, whacky high-frequency amplitudes. So some sort of constant bias, or another way of keeping the FT of the kernel from getting too small, must be applied; that's where the Wiener filter comes in. The bias is usually chosen in relation to random noise levels, or to quantization.
For motion blur, a typical case is when the clean image is convolved with a short line segment. Unfortunately, sharply cut-off line segments have plenty of zeros. Again, Wiener filter to the rescue.
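Assuming both kernels are known, a minimal sketch of this approach in MATLAB (the parameter values here are made up, and 'degraded' stands for the given corrupted image):
gaussPSF = fspecial('gaussian', 15, 2); % example size and sigma
motionPSF = fspecial('motion', 20, 45); % example length and angle
combinedPSF = conv2(gaussPSF, motionPSF); % two successive blurs = one combined kernel
NSR = 0.01; % noise-to-signal ratio, to be set from the known SNR
restored = deconvwnr(im2double(degraded), combinedPSF, NSR); % Wiener deconvolution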
Additive Gaussian noise cannot be removed, but it can be averaged out. The simplest, quickest way is to blur the image with a Gaussian, box, or other filter. The biggest problem with that: you end up with a blurred image! Median filters are somewhat better at preserving edges and details that are not too small. There are many noise-reduction techniques out there.
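For example (a quick sketch of those averaging options; the noise level and filter sizes are arbitrary):
noisy = imnoise(im2double(imread('cameraman.tif')),'gaussian',0,0.01);
gaussSmoothed = imfilter(noisy, fspecial('gaussian',7,1.5), 'replicate'); % averages noise but blurs edges too
medSmoothed = medfilt2(noisy, [3 3]); % better at preserving edges and details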
Sometimes noise reduction is easy for certain types of images. For Cassini imaging work, most image features were either high-contrast hard edges (planet edges, craters) or softly varying (cloud details in atmospheres), so I used an edge detector, fattened (dilated) its output, blurred it, and used that as a mask to protect parts of the image from a small-radius blur filter, applying different filters to different regions.
There's the Signal Processing Stack Exchange site (in beta for now), which may have questions and answers about restoring corrupted images: https://dsp.stackexchange.com/questions
