GalSim galaxy simulation with a (local WCS) PSF

I would like to use GalSim to simulate a set of galaxies convolved with PSFs.
The galaxies are simple double-Sersic profiles that I create using the Sersic class from GalSim (and then Shear to introduce some ellipticity).
The PSFs I'd like to use are pixelized images computed from Zemax or CodeV ray-tracing simulations. They have been computed on a grid corresponding to the detector surface. This surface is tilted with respect to the chief ray, so these PSFs already include the WCS distortion of the detector tilt.
I would like to compute the PSF-convolved galaxies on the detector surface. A possible way would be:
load the PSF:
psf = galsim.InterpolatedImage(galsim.Image(psf))
then convolve with the galaxy:
gal = galsim.Convolve(psf, gal)
then draw on the detector surface (with the tilt in the local_wcs):
gal_image = gal.drawImage(wcs=local_wcs)
I think I am making a mistake here, with the PSF being affected twice by the distortion (the original tilt in the Zemax PSF plus the local_wcs of the drawImage method). Are my worries correct?
Should I apply the local_wcs distortion to the original unconvolved galaxy (by applying a shear corresponding to the local_wcs), then convolve with the PSF and draw it with an undistorted WCS? Would this correctly take into account the fact that my PSF is already distorted by the detector tilt?

I have come across a possible solution, which would be to specify the local_wcs when loading the PSF:
psf = galsim.InterpolatedImage(galsim.Image(psf), wcs=local_wcs)
Would that be a correct fix?
Will GalSim notice that it only needs to distort the galaxy and convolve with the PSF (and not un-distort the PSF, convolve the galaxy with the PSF, and then distort the convolved galaxy)?
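For reference, a minimal sketch of that second approach, with the local WCS attached to the PSF image when it is loaded. All numbers, the file name, and the galaxy parameters below are made up for illustration, and whether this fully removes the double counting of the tilt is precisely the question being asked:

import galsim
import numpy as np

# Hypothetical detector-tilt WCS (the arcsec-per-pixel values are made up).
local_wcs = galsim.JacobianWCS(dudx=0.11, dudy=0.01, dvdx=0.01, dvdy=0.09)

# Ray-traced PSF samples from Zemax/CodeV (hypothetical file).
psf_array = np.loadtxt('zemax_psf.txt')
psf = galsim.InterpolatedImage(galsim.Image(psf_array), wcs=local_wcs)

# Double-Sersic galaxy with some shear (parameters are placeholders).
gal = (galsim.Sersic(n=1.0, half_light_radius=0.5) +
       galsim.Sersic(n=4.0, half_light_radius=0.2)).shear(g1=0.05, g2=0.02)

obj = galsim.Convolve([psf, gal])

# If the ray-traced PSF already includes the pixel response, drawing with
# method='no_pixel' avoids convolving with the pixel a second time.
image = obj.drawImage(nx=64, ny=64, wcs=local_wcs, method='no_pixel')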

Related

Filtering in signal and system

The file “noisy blur img.mat” has been made available on Blackboard. This file contains an image with a physical side length of 1.04 m. The image was blurred by a Gaussian PSF blurring function with an e^-1 radius of 7 pixels, and corrupted by white Gaussian noise with a standard deviation of 12.4972.
Develop (a) an inverse filter and (b) a Wiener-like filter to deblur and restore this image as best you can. For each part, describe in words how you implemented this filter, and show the filtered image.
The task is to use a Wiener filter and an inverse filter for a signals-and-systems problem.
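No answer was posted to this one, but as a rough illustration of the two filters, here is a sketch in Python with NumPy/SciPy. The variable name inside the .mat file, the floor used in the inverse filter, and the Wiener constant K are assumptions to be tuned by hand; the Gaussian PSF is built as exp(-r^2/r0^2) with r0 = 7 pixels to match the stated e^-1 radius:

import numpy as np
from scipy.io import loadmat

# Hypothetical variable name inside the .mat file; adjust to the actual key.
img = loadmat('noisy blur img.mat')['img'].astype(float)
ny, nx = img.shape

# Gaussian PSF with an e^-1 radius of 7 pixels, centred on the grid.
r0 = 7.0
y = np.arange(ny) - ny // 2
x = np.arange(nx) - nx // 2
X, Y = np.meshgrid(x, y)
psf = np.exp(-(X**2 + Y**2) / r0**2)
psf /= psf.sum()

H = np.fft.fft2(np.fft.ifftshift(psf))   # transfer function (OTF) of the blur
G = np.fft.fft2(img)

# (a) Inverse filter: divide by H, with a floor to keep the noise from blowing
#     up where H is close to zero.
eps = 1e-3
restored_inverse = np.real(np.fft.ifft2(G / np.where(np.abs(H) > eps, H, eps)))

# (b) Wiener-like filter: H* / (|H|^2 + K). K plays the role of a noise-to-signal
#     ratio; the quoted noise standard deviation of 12.4972 guides its choice.
K = 0.01
restored_wiener = np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H)**2 + K)))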

Which way is my yarn oriented?

I have an image processing problem. I have pictures of yarn:
The individual strands are partly (but not completely) aligned. I would like to find the predominant direction in which they are aligned. In the center of the example image, this direction is around 30-34 degrees from horizontal. The result could be the average/median direction for the whole image, or just the average in each local neighborhood (producing a vector map of local directions).
What I've tried: I rotated the image in small steps (1 degree) and calculated statistics in the vertical vs horizontal direction of the rotated image (for example: standard deviation of summed rows or summed columns). I reasoned that when the strands are oriented exactly vertically or exactly horizontally the difference in statistics would be greatest, and so that angle of rotation is the correct direction in the original image. However, for at least several kinds of statistical properties I tried, this did not work.
I further thought that perhaps this wasn't working because there were too many different directions at the same time in the whole image, so I tried it in a small neighborhood. In this case, there is always a very clear preferred direction (different for each neighborhood), but it is not the direction that the fibers really go... I can post my sample code but it is basically useless.
I keep thinking there has to be some kind of simple linear algebra/statistical property of the whole image, or some value derived from the 2D FFT that would give the correct direction in one step... but how?
What probably won't work: detecting individual fibers. They are not necessarily the same color, the image can shade from light to dark so edge detectors don't work well, and the image may not even be in focus sometimes. Because of that, it is not always possible even for a human to see individual fibers (see top-right in the example); they more or less have to be detected as a preferred direction in a statistical sense.
You might try doing this in the frequency domain. The output of a Fourier Transform is orientation dependent so, if you have some kind of oriented pattern, you can apply a 2D FFT and you will see a clustering around a specific orientation.
For example, making a greyscale out of your image and performing FFT (with ImageJ) gives this:
You can see a distinct cluster that is oriented orthogonally with respect to the orientation of your yarn. With some pre-processing on your source image, to remove noise and maybe enhance the oriented features, you can probably achieve a much stronger signal in the FFT. Once you have a cluster, you can use something like PCA to determine the vector for the major axis.
For info, this is a technique that is often used to enhance oriented features, such as fingerprints, by applying a selective filter in the FFT and then taking the inverse to obtain a clearer image.
An alternative approach is to try a bank of pre-built Gabor filters (see here) with a selection of orientations and frequencies, and use the resulting features as a metric for identifying the most likely orientation. There is a scikit-image article that gives some examples here.
UPDATE
Just playing with ImageJ to give an idea of some possible approaches to this: I started with the FFT shown above, then, in the following image, performed these operations (clockwise from top left): Threshold => Close => Holefill => Erode x 3:
Finally, rather than using PCA, I calculated the spatial moments of the lower left blob using this ImageJ Plugin which handily calculates the orientation of the longest axis based on the 2nd order moment. The result gives an orientation of approximately -38 degrees (with respect to the X axis):
Depending on your frame of reference you can calculate the approximate average orientation of your yarn from this rather than from PCA.
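The FFT-plus-moments idea can also be sketched in Python with NumPy and scikit-image instead of ImageJ (skipping the morphological clean-up). The file name, the 3-sigma threshold, and the size of the DC mask are all assumptions, and the sign of the reported angle depends on image row indices increasing downwards:

import numpy as np
from skimage import io, color

img = color.rgb2gray(io.imread('yarn.png'))   # hypothetical file name

# Log-magnitude spectrum with the DC component at the centre.
spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))

# Keep only the brightest part of the spectrum and blank out the DC spike.
mask = spec > spec.mean() + 3 * spec.std()
cy, cx = np.array(spec.shape) // 2
mask[cy - 2:cy + 3, cx - 2:cx + 3] = False

# Second-order moments (PCA) of the bright pixels give the cluster's major axis.
ys, xs = np.nonzero(mask)
cov = np.cov(np.vstack([xs - cx, ys - cy]))
evals, evecs = np.linalg.eigh(cov)
major = evecs[:, np.argmax(evals)]
spectrum_angle = np.degrees(np.arctan2(major[1], major[0]))

# The cluster in the spectrum is perpendicular to the yarn direction in the image.
yarn_angle = (spectrum_angle + 90.0) % 180.0
print(yarn_angle)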
I tried to use Gabor filters to enhance the orientations of your yarns. The parameters I used are:
phi = x*pi/16; % x = 1, 3, 5, 7
theta = 3;
sigma = 0.65*theta;
filterSize = 3;
And the imaginary part of the convolved image is shown below:
As you mentioned, most orientations lie between 30 and 34 degrees, so the filter with phi = 5*pi/16 (bottom left) yields the best contrast among the four.
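For a Python equivalent, scikit-image ships a Gabor filter; the sketch below sweeps the orientation and keeps the angle with the strongest response energy. The file name and the frequency of 0.1 cycles/pixel are guesses to be tuned to the yarn thickness, and depending on the angle convention the reported value may be the yarn direction or its normal, so it is worth checking on a test pattern first:

import numpy as np
from skimage import io, color
from skimage.filters import gabor

img = color.rgb2gray(io.imread('yarn.png'))   # hypothetical file name

angles = np.deg2rad(np.arange(0, 180, 2))
energies = []
for theta in angles:
    real, imag = gabor(img, frequency=0.1, theta=theta)
    energies.append(np.mean(real**2 + imag**2))

best = np.rad2deg(angles[int(np.argmax(energies))])
print('strongest Gabor response at (degrees):', best)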
I would consider using a Hough transform for this type of problem; there is a nice write-up here.
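A rough OpenCV sketch of that idea is below. It relies on Canny edges, which the question warns may be weak on out-of-focus images, so treat it as a starting point; the file name and thresholds are assumptions:

import cv2
import numpy as np

img = cv2.imread('yarn.png', cv2.IMREAD_GRAYSCALE)   # hypothetical file name
edges = cv2.Canny(img, 50, 150)

# Standard Hough transform: each detected line is (rho, theta), theta in radians,
# where theta is the angle of the line's normal measured from the x axis.
lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=100)
if lines is not None:
    thetas = lines[:, 0, 1]
    line_angles = (np.degrees(thetas) - 90.0) % 180.0   # angles of the lines themselves
    print('median line orientation (degrees):', np.median(line_angles))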

Image remapping from floating-point pixel coordinates in OpenCV

I have a matrix of floating-point pixel coordinates and a corresponding matrix of greyscale values at these floating-point pixel coordinates. I need to remap the image from the floating-point pixel coordinates to a regular grid. The cv::remap function from OpenCV transforms a source image like this:
dst(x,y) = src(mapx(x,y), mapy(x,y))
In my case I have something like this:
dst(mapx(x,y), mapy(x,y)) = src(x,y)
From the equation above I need to determine the destination image dst(x,y).
Is there an easy way in OpenCV to perform such a remapping, or can you suggest any other open-source image processing library to solve the problem?
Take the four corners of your picture.
Extract their correspondents in the dst image. Store them in two point vectors: std::vector<cv::Point> dstPts, srcPts.
Extract the geometric relation between them with cv::findHomography(dstPts, srcPts, ...).
Apply cv::warpPerspective(). Internally, it calculates and applies the correct remapping.
This works if the transform defined in your maps is a homographic transform. It doesn't work if it's some swirling, fisheye effect, lens-correction map, etc.
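Here is a sketch of those steps in Python with OpenCV. The forward map is synthesized as a small rotation purely for illustration; in the real problem, map_x and map_y would come from your floating-point coordinate matrices. Note that the homography below is estimated from source points to destination points, which matches the default convention of cv2.warpPerspective (it inverts the matrix internally to sample the source on the regular destination grid):

import cv2
import numpy as np

src = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)   # hypothetical file name
h, w = src.shape

# Synthetic forward map for illustration: a 5-degree rotation about the centre.
ang = np.deg2rad(5.0)
xx, yy = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
cx, cy = w / 2.0, h / 2.0
map_x = cx + (xx - cx) * np.cos(ang) - (yy - cy) * np.sin(ang)
map_y = cy + (xx - cx) * np.sin(ang) + (yy - cy) * np.cos(ang)

# Four source corners and where the maps send them.
src_pts = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
dst_pts = np.float32([[map_x[int(y), int(x)], map_y[int(y), int(x)]] for x, y in src_pts])

# Homography mapping source coordinates to destination coordinates.
H, _ = cv2.findHomography(src_pts, dst_pts)

# warpPerspective fills the regular destination grid from the source image.
dst = cv2.warpPerspective(src, H, (w, h))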

Applying 1D Gaussian blur to a data set

I have some data set where each object has a Value and Price. I want to apply Gaussian Blur to their Price using their Value. Since my data has only 1 component to use in blurring, I am trying to apply 1D Gaussian blur.
My code does this:
totalPrice = 0;
totalValue = 0;
for each object.OtherObjectsWithinPriceRange()
totalPrice += price;
totalValue += Math.Exp(-value*value);
price = totalPrice/totalValue;
I see good results, but the 1D Gaussian blur algorithms I see online seem to use deviations, sigma, pi, etc. Do I need them, or are they strictly for 2D Gaussian blurs? They combine these 1D blur passes vertically and horizontally, so they are still accounting for 2D.
Also I display the results as colors but the white areas are a little over 1 (white). How can I normalize this? Should I just clamp the values to 1? That's why I am wondering if I am using the correct formula.
Your code applies some sort of a blur, though definitely not Gaussian. The Gaussian blur would look something like
kindaSigma = 1;
priceBlurred = object.price;   // the object's own weight is Math.Exp(0) = 1
weightSum = 1;
for each object.OtherObjectsWithinPriceRange()
    priceBlurred += price*Math.Exp(-value*value/kindaSigma/kindaSigma);
    weightSum += Math.Exp(-value*value/kindaSigma/kindaSigma);
priceBlurred = priceBlurred/weightSum;   // divide by the total weight so the kernel sums to 1
and that is only assuming that value is proportional to a "distance" between the object and the other objects within the price range, whatever this "distance" means in your application.
To your questions.
A 2D Gaussian blur is completely equivalent to a combination of vertical and horizontal 1D Gaussian blurs done one after another. That is how the 2D Gaussian blur is usually implemented in practice (see the short sketch after this answer).
You don't need any PI or sigmas as a multiplicative factor for the Gaussian - those have an effect of merely scaling an image and can be safely ignored.
The sigma (standard deviation) under the exponent has a major impact on the result, but it is not possible for me to tell you if you need it or not. It depends on your application.
Want more blur: use larger kindaSigma in the snippet above.
Want less blur: use smaller kindaSigma.
When kindaSigma is too small, you won't notice any blur at all. When kindaSigma is too large, the Gaussian blur effectively transforms itself into a moving average filter.
Play with it and choose what you need.
I am not sure I understand your normalization question. In image processing it is common to store each color component (R, G, B) as an unsigned char, so black is represented by (0,0,0) and white by (255,255,255). Of course, you are free to choose a different representation and take white as 1. But keep in mind that for visualization packages using the standard 8-bit representation, a value of 1 means an almost black color, so you will likely need to manipulate and renormalize your image before display.
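The separability claim above is easy to check numerically; here is a short NumPy/SciPy sketch (the array size and sigma are arbitrary):

import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

img = np.random.rand(64, 64)
sigma = 2.0

# Full 2D Gaussian blur.
blurred_2d = gaussian_filter(img, sigma)

# The same result from two 1D passes, one along each axis.
blurred_sep = gaussian_filter1d(gaussian_filter1d(img, sigma, axis=0), sigma, axis=1)

print(np.allclose(blurred_2d, blurred_sep))   # True, up to floating-point error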

What is optical transfer function in Image restoration?

I am studying inverse filtering and was trying to code it, so I looked for some references on the net. Everyone uses the optical transfer function, which is nowhere to be seen in the Gonzalez book I am referring to.
% Inverse_Filter_Demo-
clc
clear all
close all
original_image=imread('cameraman.jpg'); %loading the original (un-blurred) image
original_image=double(original_image);
%The blur function (PSF- Point Spread Function) that will be added to the original image
PSF=fspecial('motion',20,45);
%Adding blur to the original image
degraded_image = imfilter(original_image,PSF,'circular','conv');
OTF= psf2otf(PSF,[size(degraded_image,1) size(degraded_image,2)]);%Getting OTF from PSF
Inverse_Filter=conj(OTF)./((abs(OTF)).^2); %The inverse filter
%Performing the Fourier Transform of the degraded image
FT_degraded_image=fftn(degraded_image);
%Performing the restoration itself
restored_image=abs(ifftn(FT_degraded_image.*Inverse_Filter));
%Presenting the restoration results:
figure;
set(gca,'Fontsize',14);
colormap(gray);
imagesc(original_image,[0 255]);
truesize;
title('Original image');
figure;
set(gca,'Fontsize',14);
colormap(gray);
imagesc(degraded_image,[0 255]);
truesize;
title('Degraded image');
figure;
set(gca,'Fontsize',14);
colormap(gray);
imagesc(restored_image,[0 255]);
truesize;
title('Restoration of the degraded image (using Inverse Filter)');
Your question is unclear, and probably more appropriate for dsp.stackexchange.com. However, if what you are asking is "what is the optical transfer function?" then the Wikipedia article on OTFs is a fine place to start.
The simplest way to think about it is that the optical transfer function is the Fourier transform of the point spread function (PSF). Usually the PSF is a filter (convolution kernel) that describes how a single pinhole-type light source would get smeared into an actual image by some device.
The OTF is just the amplitude/phase representation of that smearing process. It is the filter that the image's Fourier transform is multiplied by in the frequency domain to produce the Fourier transform of the smeared output image (instead of convolving, which is what you do with the PSF in the spatial domain). Applying the inverse Fourier transform after applying the OTF should give you the actual image the device would produce.
For mathematical convenience, and sometimes for processing efficiency, it can be more expedient to work with the OTF instead of the regular spatial domain's PSF. This is why you'll see some algorithms and textbooks describe their methods with the OTF instead of the PSF.
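To make the PSF/OTF relationship concrete, here is a small NumPy/SciPy sketch of what MATLAB's psf2otf is doing: pad the PSF to the image size, shift its centre to the (0, 0) pixel, and take the FFT. The PSF used here is an arbitrary 5x5 horizontal smear chosen just for the demonstration:

import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
img = rng.random((64, 64))

# A small horizontal motion-blur-like PSF, normalized to sum to 1.
psf = np.zeros((5, 5))
psf[2, :] = 1.0
psf /= psf.sum()

# psf2otf equivalent: zero-pad to the image size, move the kernel centre to (0, 0),
# then take the FFT. The result is the OTF.
padded = np.zeros_like(img)
padded[:5, :5] = psf
padded = np.roll(padded, (-2, -2), axis=(0, 1))
otf = np.fft.fft2(padded)

# Multiplying by the OTF in the frequency domain ...
blurred_freq = np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

# ... matches circular convolution with the PSF in the spatial domain.
blurred_spatial = convolve(img, psf, mode='wrap')
print(np.allclose(blurred_freq, blurred_spatial))   # True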
