Why is surface fitting not working in the logarithmic domain?

Let's say we have a corrupted image C, a bias profile B, and a true image A. We can define the model
C = A * B;
and recover the original image as
A = C / B;
or, in the log domain,
log A = log C - log B.
Now let's say I have the true image A, I introduce the bias B, and I get the corrupted image C. I can then correct this biased image C using polynomial regression: I fit a surface to the corrupted image C after converting it to the log domain, and subtract the bias profile from it as shown above. After the subtraction, I don't need to apply exp(log C - log B), as is obvious; only normalization is needed to get the [0 255] range.
Algorithm:
1. The original image, without any bias field, is corrupted with a polynomial profile, which results in an image with non-uniform illumination.
2. The biased image is converted to the log domain and the surface is approximated with a polynomial fit.
3. The approximated surface is subtracted from the biased image, which (ideally) gives back the original image with no bias field.
4. Measure the RMSE between the approximated surface and the polynomial field introduced in step 1, and the RMSE between the original image and the image we get back after the subtraction.
Code:
clear;clc;close all;
% create the image on which the bias profile is to be generated
I = ones(300);
I = padarray(I,[20,20],'both','symmetric'); % padding
%%
%creating a bias profile using polynomial modeling
[x,y] = meshgrid(1:size(I,2),1:size(I,1)); % x along columns, y along rows
profile = -2.5.*x.^3 - 2.5.* y.^3 + 0.25 .*(x.* y.^2) - 0.3*( x.^2 .* y ) - 0.5.* x .* y - x + y - 2.5*( x.^2) - y.^2 + 0.5 .* x .*y + 1;
% come to scale [0 1]
profile = profile - min(profile(:));
profile = profile / max(profile(:));
figure,imshow(profile,[]); %introduced bias profile
%% corrupt the image
biasedImage = (I .* profile);
figure,imshow(biasedImage,[]); %biased Image
cImage = log(biasedImage+1); % conversion to log domain; +1 avoids infinite values in case of 0 intensity values in the corrupted image
%% forming the input for prediction of surface
colorChannel = cImage;
rows = size(colorChannel, 1);
columns = size(colorChannel, 2);
[X,Y] = meshgrid(1:columns, 1:rows);
z = colorChannel;
x1d = reshape(X, numel(X), 1);
y1d = reshape(Y, numel(Y), 1);
z1d = double(reshape(z, numel(z), 1)); %dependent variables
x = [x1d y1d]; % two regressors
%% surface fitting
m = 3; %define the order of polynomial
p = polyfitn(x,z1d,m); % fitting step
zg = polyvaln(p, x); % prediction step
modeledColorChannel = reshape(zg, [rows columns]); % predicted surface
%modeledColorChannel = exp(modeledColorChannel)-1; % if we use this step, the correction below must be division instead of subtraction
%f = biasedImage ./ modeledColorChannel; % same as the step below, but since exp takes us out of the log domain, it becomes division
%% correction
f = cImage- modeledColorChannel; %getting the original image back.
%grayImage = f(21:end-20,21:end-20);
%modeledColorChannel = modeledColorChannel(21:end-20,21:end-20); %to remove the padding
figure,imshow(f,[]);
figure,imshow(modeledColorChannel,[]);
%% measure the RMSE for image
y = (I - f);
RMSE = sqrt(mean(y(:).^2));
disp(RMSE);
% RMSE for profile
z = (modeledColorChannel - profile);
RMSE = sqrt(mean(z(:).^2));
disp(RMSE);
Results:
In the case of subtraction, f = cImage - modeledColorChannel:
1.0000
0.2127
Corrected image: [image]
In the case of division, f = cImage ./ modeledColorChannel (although this is not correct per the theory):
0.0190
0.2127
Corrected image: [image]
Now, the question is: why do I get a lower RMSE at the end if I do division in the log domain instead of subtraction, as I am doing here (see the %% correction section)? How is it possible to get a higher RMSE for subtraction when that is the theoretically correct operation? As per my understanding, if I keep all of my calculations in the log domain, image division should become image subtraction. The difference is obvious if you run the code and compare the image f at the end of the correction for division versus subtraction in the log domain.
Note: In both cases, the RMSE between the introduced and estimated profile is the same, since I do the estimation in the log domain either way, whether with image division or image subtraction.
See here for the polyfitn toolbox:
www.mathworks.com/matlabcentral/fileexchange/34765-polyfitn
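As an aside, if you have the Curve Fitting Toolbox, a comparable cubic surface fit can be obtained without the File Exchange dependency. A minimal sketch, reusing the x1d, y1d, z1d, rows and columns variables from the code above:
sf = fit([x1d, y1d], z1d, 'poly33'); % cubic polynomial surface in x and y
zg = sf(x1d, y1d); % evaluate the fitted surface
modeledColorChannel = reshape(zg, [rows columns]);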

Let me add to the answer to my own question, as I found my mistakes, in case anybody faces the same issue in the future.
Mistake 1: "After the subtraction, I don't need to apply exp(log C - log B), as is obvious; only normalization is needed to get the [0 255] range."
My intuition was that I don't need to apply exp() to get the original values back. But in fact I do have to apply exp(): the log on the LHS and the log on the RHS never cancel each other.
If log(A) = log(B), then to get the value of A back I need A = exp(log(B)).
Mistake 2: In the log domain I subtract two images, so I do not face the infinity problem that we usually face in the case of division.
So, while converting the image to the log domain, I could simply do
cImage = log(biasedImage);
instead of
cImage = log(biasedImage+1);
Here, adding +1 creates unwanted blur in the images, because during estimation it pushes the predicted surface towards high values in the dark areas.
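Putting both fixes together, a minimal sketch of the corrected conversion and recovery steps (assuming biasedImage contains no exact zeros, so log() stays finite):
cImage = log(biasedImage); % no +1 offset (fixes Mistake 2)
% ... fit modeledColorChannel on cImage with polyfitn/polyvaln as before ...
f = exp(cImage - modeledColorChannel); % apply exp() to leave the log domain (fixes Mistake 1)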

Related

How to fit rotated hyperbola accurately?

My experimental data points look like pieces of a hyperbola. Below I provide code (MATLAB) which generates "dummy" data very similar to the original:
function [x_out,y_out,alpha1,alpha2,ecK,offsetX,offsetY,branchDirection] = dummyGenerator(mu_alpha)
alpha_range=0.1;
numberPoint2Return=100; % number of points to return
ecK=10.^((rand(1)-0.5)*2*2); % eccentricity-related parameter
% slope of the first asymptote (radians)
alpha1 = ((rand(1)-0.5)*alpha_range+mu_alpha);
% slope of the second asymptote (radians)
alpha2 = -((rand(1)-0.5)*alpha_range+mu_alpha);
beta = pi-abs(alpha1-alpha2); % angle between asymptotes (radians)
branchDirection = datasample([0,1],1); % where branch directed
% up: branchDirection==0;
% down: branchDirection==1;
% generate branch
x = logspace(-3,2,numberPoint2Return*100)'; %over sampling
y = (tan(pi/2-beta)*x+ecK./x);
% rotate branch using branchDirection
theta = -(pi/2-alpha1)-pi*branchDirection;
% get rotation matrix
rotM = [ cos(theta), -sin(theta);
sin(theta), cos(theta) ];
% get rotated coordinates
XY1=[x,y]*rotM;
x1=XY1(:,1); y1=XY1(:,2);
% remove possible Inf
x1(~isfinite(y1))=[];
y1(~isfinite(y1))=[];
% add noise
y1=((rand(numel(y1),1)-0.5)+y1);
% downsampling
%x_out=linspace(min(x1),max(x1),numberPoint2Return)';
x_out=linspace(-10,10,numberPoint2Return)';
y_out=interp1(x1,y1,x_out,'nearest');
% randomize offset
offsetX=(rand(1)-0.5)*50;
offsetY=(rand(1)-0.5)*50;
x_out=x_out+offsetX;
y_out=y_out+offsetY;
end
Typical results are presented in the figure below: [figure]
The data has the following important property: the slopes of both asymptotes come from the same distribution (just with different signs), so for my fitting I have a rather good estimate of mu_alpha.
Now the problematic part starts. I try to fit these data points. The main idea of my approach is to find a rotation that brings the data to the y = k*x + b/x shape and then simply fit that.
I use the following code:
function Rsquare = fitFunction(x,y,alpha1,alpha2,ecK,offsetX,offsetY)
R=[];
for branchDirection=[0 1]
% translate back
xt=x-offsetX;
yt=y-offsetY;
% rotate back
theta = (pi/2-alpha1)+pi*branchDirection;
rotM = [ cos(theta), -sin(theta);
sin(theta), cos(theta) ];
XY1=[xt,yt]*rotM;
x1=XY1(:,1); y1=XY1(:,2);
% get fitted values
beta = pi-abs(alpha1-alpha2);
%xf = logspace(-3,2,10^3)';
y1=y1(x1>0);
x1=x1(x1>0);
%x1=x1-min(x1);
xf=sort(x1);
yf=(tan(pi/2-beta)*xf+ecK./xf);
R(end+1)=sum((xf-x1).^2+(yf-y1).^2);
end
Rsquare=min(R);
end
Unfortunately this code does not work well; very often I get bad results, even when I use the known (from the simulation) initial parameters.
Could you help me find a good solution for such a fitting problem?
UPDATE:
I found a solution (see the answer below), but
I still have a small problem - my estimate of the a parameter is bad, and sometimes I do not get good fits because of this.
Could you suggest some ideas on how to estimate a from the experimental points?
I found the main problem (it was my brain, as usual)! I did not know about the general equation of a hyperbola. The equation for my hyperbolas is:
((x-x0)/a).^2 - ((y-y0)/b).^2 = -1
Solving for y gives y = y0 ± b*sqrt(((x-x0)/a).^2 + 1), and with b = a*tan(alpha) for the asymptote slope this is exactly the form fitted below. So we do not need to care about the sign of the branch, and I may use the following code:
mu_alpha=pi/6;
[x,y,alpha1,alpha2,ecK,offsetX,offsetY,branchDirection] = dummyGenerator(mu_alpha);
% hyperb = @(alpha,a,x0,y0) tan(alpha)*a*sqrt(((x-x0)/a).^2+1)+y0;
hyperb = @(x,P) tan(P(1))*P(2)*sqrt(((x-P(3))./P(2)).^2+1)+P(4);
cost = @(P) fitFunction(x,y,P);
x0=mean(x);
y0=mean(y);
a=(max(x)-min(x))./20;
P0=[mu_alpha,a,x0,y0];
[P,fval] = fminsearch(cost,P0);
hold all
plot(x,y,'-o')
plot(x,hyperb(x,P))
function Rsquare = fitFunction(x,y,P)
%x=sort(x);
yf=tan(P(1))*P(2)*sqrt(((x-P(3))./P(2)).^2+1)+P(4);
Rsquare=sum((yf-y).^2);
end

Extract a page from a uniform background in an image

If I have an image, in which there is a page of text shot on a uniform background, how can I auto detect the boundaries between the paper and the background?
An example of the image I want to detect is shown below. The images that I will be dealing with consist of a single page on a uniform background and they can be rotated at any angle.
One simple method would be to threshold the image by some known value once you convert the image to grayscale. The problem with that approach is that we are applying a global threshold and so some of the paper at the bottom of the image will be lost if you make the threshold too high. If you make the threshold too low, then you'll certainly get the paper, but you'll include a lot of the background pixels too and it will probably be difficult to remove those pixels with post-processing.
One thing I can suggest is to use an adaptive threshold algorithm. An algorithm that has worked for me in the past is the Bradley-Roth adaptive thresholding algorithm. You can read up about it here on a post I commented on a while back:
Bradley Adaptive Thresholding -- Confused (questions)
However, if you want the gist of it: an integral image of the grayscale version of the image is computed first. The integral image is important because it allows you to calculate the sum of pixels within a window in O(1) complexity. Calculating the integral image itself is O(n^2) for an n x n image, but you only have to do that once. With the integral image, you examine an s x s neighbourhood around each pixel: if the pixel's intensity is more than t% below the average intensity within that s x s window, it is classified as background; otherwise it is classified as foreground. This is adaptive because the thresholding is done using local pixel neighbourhoods rather than a global threshold.
I've coded an implementation of the Bradley-Roth algorithm here for you. The default parameters for the algorithm are s being 1/8th of the width of the image and t being 15%. Therefore, you can just call it this way to invoke the default parameters:
out = adaptiveThreshold(im);
im is the input image and out is a binary image that denotes what belongs to foreground (logical true) or background (logical false). You can play around with the second and third input parameters: s being the size of the thresholding window and t the percentage we talked about above and can call the function like so:
out = adaptiveThreshold(im, s, t);
Therefore, the code for the algorithm looks like this:
function [out] = adaptiveThreshold(im, s, t)
%// Error checking of the input
%// Too few or too many arguments? Check before applying defaults
if nargin == 0, error('Too few arguments'); end
if nargin >= 4, error('Too many arguments'); end
%// Default value for s is 1/8th the width of the image
%// Must make sure that this is a whole number
if nargin <= 1, s = round(size(im,2) / 8); end
%// Default value for t is 15
%// t is used to determine whether the current pixel is t% lower than the
%// average in the particular neighbourhood
if nargin <= 2, t = 15; end
%// Convert to grayscale if necessary then cast to double to ensure no
%// saturation
if size(im, 3) == 3
im = double(rgb2gray(im));
elseif size(im, 3) == 1
im = double(im);
else
error('Incompatible image: Must be a colour or grayscale image');
end
%// Compute integral image
intImage = cumsum(cumsum(im, 2), 1);
%// Define grid of points
[rows, cols] = size(im);
[X,Y] = meshgrid(1:cols, 1:rows);
%// Ensure s is even so that we are able to index the image properly
s = s + mod(s,2);
%// Access the four corners of each neighbourhood
x1 = X - s/2; x2 = X + s/2;
y1 = Y - s/2; y2 = Y + s/2;
%// Ensure no co-ordinates are out of bounds
x1(x1 < 1) = 1;
x2(x2 > cols) = cols;
y1(y1 < 1) = 1;
y2(y2 > rows) = rows;
%// Count how many pixels there are in each neighbourhood
count = (x2 - x1) .* (y2 - y1);
%// Compute row and column co-ordinates to access each corner of the
%// neighbourhood for the integral image
f1_x = x2; f1_y = y2;
f2_x = x2; f2_y = y1 - 1; f2_y(f2_y < 1) = 1;
f3_x = x1 - 1; f3_x(f3_x < 1) = 1; f3_y = y2;
f4_x = f3_x; f4_y = f2_y;
%// Compute 1D linear indices for each of the corners
ind_f1 = sub2ind([rows cols], f1_y, f1_x);
ind_f2 = sub2ind([rows cols], f2_y, f2_x);
ind_f3 = sub2ind([rows cols], f3_y, f3_x);
ind_f4 = sub2ind([rows cols], f4_y, f4_x);
%// Calculate the areas for each of the neighbourhoods
sums = intImage(ind_f1) - intImage(ind_f2) - intImage(ind_f3) + ...
intImage(ind_f4);
%// Determine whether the summed area surpasses a threshold
%// Set this output to 0 if it doesn't
locs = (im .* count) <= (sums * (100 - t) / 100);
out = true(size(im));
out(locs) = false;
end
When I use your image and I set s = 500 and t = 5, here's the code and this is the image I get:
im = imread('http://i.stack.imgur.com/MEcaz.jpg');
out = adaptiveThreshold(im, 500, 5);
imshow(out);
You can see that there are some spurious white pixels at the bottom of the image, and there are some holes inside the paper we need to fill in. As such, let's use some morphology: declare a structuring element that's a 15 x 15 square, perform an opening to remove the noisy pixels, then fill in the holes when we're done:
se = strel('square', 15);
out = imopen(out, se);
out = imfill(out, 'holes');
imshow(out);
This is what I get after all of that: [image]
Not bad eh? Now if you really want to see what the image looks like with the paper segmented, we can use this mask and multiply it with the original image. This way, any pixels that belong to the paper are kept while those that belong to the background go away:
out_colour = bsxfun(@times, im, uint8(out));
imshow(out_colour);
We get this: [image]
You'll have to play around with the parameters until it works for you, but the above parameters were the ones I used to get it working for the particular page you showed us. Image processing is all about trial and error, and putting processing steps in the right sequence until you get something good enough for your purposes.
Happy image filtering!

High Pass Butterworth Filter on images in MATLAB

I need to implement a high pass Butterworth filter in MATLAB for the purposes of image filtering. I have implemented one but it looks like it doesn't work. Here is the code I have written. Can anyone tell me what is wrong?
n=1;
d=50;
A=1.5;
im=imread('imagex.jpg');
h=size(im,1);
w=size(im,2);
[x y]=meshgrid(-floor(w/2):floor(w-1/2),-floor(h/2):floor(h-1/2));
hhp=(1./(d./(x.^2+y.^2).^0.5).^(2*n));
image_2Dfilter=fftshift(fft2(im));
Image_butterworth=image_2Dfilter;
imshow(Image_butterworth);
ifftshow(Image_butterworth);
For one thing, there is no command called ifftshow. Secondly, you aren't filtering anything: all you're doing is visualizing the spectrum of the image.
In terms of visualizing the spectrum, the way you're doing it right now is very dangerous. You are visualizing the raw coefficients at each spatial frequency component, which are complex-valued in nature. If you want to visualize the spectrum in a way that makes sense to most of us, it's better to look at either the magnitude or the phase; for designing and checking a Butterworth filter, the magnitude is the one to examine.
You can find the magnitude of the spectrum by using the abs function. Even when you do that, if you did imshow directly on the magnitude, you will get a visualization that is zero everywhere except for the middle. This is because the DC component is so large and the rest of the spectrum is small in comparison.
Let me show you an example. This is the cameraman image that is part of the image processing toolbox:
im = imread('cameraman.tif');
figure;
imshow(im);
Now, let's visualize the spectrum, ensuring that the DC component is in the centre of the image - you already did this with fftshift. It's also a good idea to cast the image to double to ensure the best precision of the data. In addition, make sure you apply abs to find the magnitude:
fftim = fftshift(fft2(double(im)));
mag = abs(fftim);
figure;
imshow(mag, []);
As you can see, it's not very useful, for the reason I mentioned. A better way to visualize the spectrum of the image is to apply a log transformation to it, which compresses the dynamic range so that it fits better for display. In other words, you add 1 to the magnitude, then apply a logarithm so that higher values taper off. It doesn't matter which base you use, so I'll just use the natural logarithm, which is encapsulated by the log command:
figure;
imshow(log(1 + mag), []);
Now that's much better. Let's get to your filtering mechanism. Your Butterworth filter is slightly incorrect. First, the meshgrid of coordinates is wrong: the -1 operation at the end of the interval needs to go outside the floor:
[x y]=meshgrid(-floor(w/2):floor(w/2)-1,-floor(h/2):floor(h/2)-1);
Remember, you are defining a symmetric interval about the centre of the image, and what you had originally wasn't correct. I'd also like to mention that this looks like a high-pass filter, so the output should look like an edge detection. In addition, the definition of the Butterworth high pass filter is incorrect. The correct definition of the filter in the frequency domain is:
H(u,v) = 1 / (1 + B * (D0 / D(u,v))^(2n))
Here D(u,v) is the distance from the centre of the image in the frequency domain, D0 is the cutoff distance, and B is a scale factor controlling the desired gain at the cutoff distance; n is the order of the filter. D0 in your case is d = 50. In practice, B = sqrt(2) - 1 so that at the cutoff distance D0 we get H(u,v) = 1 / sqrt(2) = 0.707, which is the 3 dB cutoff mostly seen in electronic circuit filters. Sometimes you'll see B set to 1 for simplicity, but it's common to set B = sqrt(2) - 1.
However, your current code isn't doing any filtering. To filter in the frequency domain, you simply multiply the spectrum of the image with the spectrum of the filter itself. This is equivalent to convolution in the spatial domain. Once you do that, you simply undo the fftshift that was performed on the image, take the inverse FFT and then eliminate any imaginary components that are due to numerical imprecision. Also, let's cast to uint8 to make sure that we respect the original image type.
That can be done like so:
%// Your code with meshgrid fix
n=1;
d=50;
h=size(im,1);
w=size(im,2);
fftim = fftshift(fft2(double(im)));
[x y]=meshgrid(-floor(w/2):floor(w/2)-1,-floor(h/2):floor(h/2)-1);
%hhp=(1./(d./(x.^2+y.^2).^0.5).^(2*n));
%%%%%%// New code
B = sqrt(2) - 1; %// Define B
D = sqrt(x.^2 + y.^2); %// Define distance to centre
hhp = 1 ./ (1 + B * ((d ./ D).^(2 * n)));
out_spec_centre = fftim .* hhp;
%// Uncentre spectrum
out_spec = ifftshift(out_spec_centre);
%// Inverse FFT, get real components, and cast
out = uint8(real(ifft2(out_spec)));
%// Show image
imshow(out);
If you want to see what the filtered spectrum looks like, just do this:
figure;
imshow(log(1 + abs(out_spec_centre)), []);
We get: [image]
This makes sense. You can see that the middle of the spectrum is slightly darker than the outer edges. That's because the high-pass Butterworth filter suppresses the lower frequency terms and passes the higher frequency ones, which show up with higher intensity.
Now, out contains your filtered image, and we finally get this: [image]
That looks like a fine result! However, naively casting the image to uint8 truncates any negative values to 0 and any values greater than 255 to 255. Because this is an edge detection, you want to detect both the negative and positive transitions, so a good idea is to normalize the output so that it ranges from [0,1], then multiply by 255 and cast to uint8. This way, no change in the image is visualized as gray, negative changes are visualized as dark, and positive changes are visualized as white. So you'd do something like this:
%// Your code with meshgrid fix
n=1;
d=50;
h=size(im,1);
w=size(im,2);
fftim = fftshift(fft2(double(im)));
[x y]=meshgrid(-floor(w/2):floor(w/2)-1,-floor(h/2):floor(h/2)-1);
%hhp=(1./(d./(x.^2+y.^2).^0.5).^(2*n));
%%%%%%// New code
B = sqrt(2) - 1; %// Define B
D = sqrt(x.^2 + y.^2); %// Define distance to centre
hhp = 1 ./ (1 + B * ((d ./ D).^(2 * n)));
out_spec_centre = fftim .* hhp;
%// Uncentre spectrum
out_spec = ifftshift(out_spec_centre);
%// Inverse FFT, get real components
out = real(ifft2(out_spec));
%// Normalize and cast
out = (out - min(out(:))) / (max(out(:)) - min(out(:)));
out = uint8(255*out);
%// Show image
imshow(out);
We get this: [image]
I think that you should work a little bit differently:
n=1;
D0=50; % renamed from d; d usually denotes the distance (u^2+v^2)^(1/2)
A=1.5; % normally the amplitude is 1
im=imread('cameraman.jpg');
[M,N]=size(im); % an easy way to get h and w
% compute the 2D Fourier transform so that we can multiply by the filter
F=fft2(double(im));
% build the filter on an M x N frequency grid; the grid below wraps the
% frequencies itself, so no fftshift/ifftshift is needed
u=0:(M-1);
v=0:(N-1);
idx=find(u>M/2);
u(idx)=u(idx)-M;
idy=find(v>N/2);
v(idy)=v(idy)-N;
[V,U]=meshgrid(v,u);
D=sqrt(U.^2+V.^2);
H =A * (1./(1 + (D0./D).^(2*n)));
% multiply element by element
G=H.*F;
g=real(ifft2(double(G)));
subplot(1,2,1); imshow(im); title('Input image');
subplot(1,2,2); imshow(g,[ ]); title('filtered image');

Image Fusion using stationary wavelet transform (SWT) (MATLAB)

I'm trying to fuse two images using SWT, but I'm getting this error:
The level of decomposition 1
and the size of the image (1,5)
are not compatible.
Suggested size: (2,6)
How to change the size of the image to make it compatible for transform?
The code I've used is:
clc
i=1;
fol=1;
n=0;
for fol=1:5
f='folder';
folder = strcat(f, num2str(fol));
cd(folder)
d= numel(D);
i=(n+1);
Fname1 = strcat(int2str(i),'.bmp');
Fname2 = strcat(int2str(i+1),'.bmp');
im1 = imread(Fname1);
im2 = imread(Fname2);
im1=double(im1);
im2=double(im2);
% image decomposition using discrete stationary wavelet transform
[A1L1,H1L1,V1L1,D1L1] = swt2(im1,1,'sym2');
[A2L1,H2L1,V2L1,D2L1] = swt2(im2,1,'sym2');
% fusion start
AfL1 = 0.5*(A1L1+A2L1);
D = (abs(H1L1)-abs(H2L1))>=0;
HfL1 = D.*H1L1 + (~D).*H2L1;
D = (abs(V1L1)-abs(V2L1))>=0;
VfL1 = D.*V1L1 + (~D).*V2L1;
D = (abs(D1L1)-abs(D2L1))>=0;
DfL1 = D.*D1L1 + (~D).*D2L1;
% fused image
imf = iswt2(AfL1,HfL1,VfL1,DfL1,'sym2');
figure;
imshow(imf,[]);
Iname= strcat(int2str(fol),'.bmp');
imwrite(imf,Iname);
end
To address your first problem: that image is really small. I'm assuming it's an image of size 1 x 5. I would suggest making your image larger, or perhaps using imresize on it. However, as Ander said in his comment, I wouldn't call a 1 x 5 matrix an image.
To address your second problem: once you finally load in an image, the wavelet transform will most likely give you floating point values that lie outside the dynamic range expected of an image when saving to file. As such, it's good to normalize the image first, then save it to file.
Therefore, do this right before you save the image:
%// ...
%// Your code...
imshow(imf,[]);
%// Normalize the image - Change
imf = (imf - min(imf(:))) / (max(imf(:)) - min(imf(:)));
%// Your code again
%// Now save
Iname= strcat(int2str(fol),'.bmp');
imwrite(imf,Iname);
The above transformation normalizes an image so that the minimum is 0 and the maximum is 1. Once you do that, it should be visualized properly. FWIW, imshow(imf,[]); does this normalization for you for display purposes, but it doesn't modify the image itself.
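For reference, mat2gray from the Image Processing Toolbox performs the same min-max normalization to [0,1], so the normalization step above could equivalently be written as:
imf = mat2gray(imf); % same as (imf - min(imf(:))) / (max(imf(:)) - min(imf(:)))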

How to add a Gaussian shaped object to an image?

I am interested in adding a single Gaussian shaped object to an existing image, something like in the attached image. The base image that I would like to add the object to is 8-bit unsigned with values ranging from 0-255. The bright object in the attached image is actually a tree represented by normalized difference vegetation index (NDVI) data. The attached script is what I have so far. How can I add a Gaussian shaped object (i.e. a tree) with values ranging from 110-155 to an existing NDVI image?
Sample data is available here, which can be used with this script to calculate NDVI.
file = 'F:\path\to\fourband\image.tif';
[I R] = geotiffread(file);
outputdir = 'F:\path\to\output\directory\'
%% Make NDVI calculations
NIR = im2single(I(:,:,4));
red = im2single(I(:,:,1));
ndvi = (NIR - red) ./ (NIR + red);
ndvi = double(ndvi);
%% Stretch NDVI to 0-255 and convert to 8-bit unsigned integer
ndvi = floor((ndvi + 1) * 128); % [-1 1] -> [0 256]
ndvi(ndvi < 0) = 0; % not really necessary, just in case & for symmetry
ndvi(ndvi > 255) = 255; % in case the original value was exactly 1
ndvi = uint8(ndvi); % change data type from double to uint8
%% Need to add a random tree in the image here
%% Write to geotiff
tiffdata = geotiffinfo(file);
outfilename = [outputdir 'ndvi_' '.tif'];
geotiffwrite(outfilename, ndvi, R, 'GeoKeyDirectoryTag', tiffdata.GeoTIFFTags.GeoKeyDirectoryTag)
Your post is asking how to do three things:
How do we generate a Gaussian shaped object?
How can we do this so that the values range between 110 - 155?
How do we place this in our image?
Let's answer each one separately, where the order of each question builds on the knowledge from the previous questions.
How do we generate a Gaussian shaped object?
You can use fspecial from the Image Processing Toolbox to generate a Gaussian for you:
mask = fspecial('gaussian', hsize, sigma);
hsize specifies the size of your Gaussian. You have not specified it in your question, so I'm assuming you will want to play around with this yourself. This will produce an hsize x hsize Gaussian matrix. sigma is the standard deviation of your Gaussian distribution, which you have also not specified. sigma and hsize go hand-in-hand. Referring to my previous post on how to determine sigma, it is generally a good rule to choose the standard deviation of your mask according to the 3-sigma rule. As such, once you set hsize, you can calculate sigma as:
sigma = (hsize-1) / 6;
As such, figure out what hsize is, then calculate your sigma. After that, invoke fspecial as I did above. It's generally a good idea to make hsize an odd integer, because when we finally place the mask in your image, the indexing syntax will allow it to be placed symmetrically. I'll talk about this when we get to the last question.
How can we do this so that the values range between 110 - 155?
We can do this by adjusting the values within mask so that the minimum is 110 while the maximum is 155. This can be done by:
%// Adjust so that values are between 0 and 1
maskAdjust = (mask - min(mask(:))) / (max(mask(:)) - min(mask(:)));
%//Scale by 45 so the range goes between 0 and 45
%//Cast to uint8 to make this compatible for your image
maskAdjust = uint8(45*maskAdjust);
%// Add 110 to every value so the range goes between 110 - 155
maskAdjust = maskAdjust + 110;
In general, if you want to adjust the values within your Gaussian mask so that they go from [a,b], normalize between 0 and 1 first, then do:
maskAdjust = uint8((b-a)*maskAdjust) + a;
You'll notice that we cast this mask to uint8. The reason we do this is to make the mask compatible to be placed in your image.
How do we place this in our image?
All you have to do is figure out the row and column you would like the centre of the Gaussian mask to be placed. Let's assume these variables are stored in row and col. As such, assuming you want to place this in ndvi, all you have to do is the following:
hsizeHalf = floor(hsize/2); %// hsize being odd is important
%// Place Gaussian shape in our image
ndvi(row - hsizeHalf : row + hsizeHalf, col - hsizeHalf : col + hsizeHalf) = maskAdjust;
The reason why hsize should be odd is to allow an even placement of the shape in the image. For example, if the mask size is 5 x 5, then the above syntax for ndvi simplifies to:
ndvi(row-2:row+2, col-2:col+2) = maskAdjust;
From the centre of the mask, it stretches 2 rows above and 2 rows below, and 2 columns to the left and 2 columns to the right. If the mask size were even, we would face an ambiguous choice of how to place the mask: with a 4 x 4 mask, for example, should we choose the second or the third row as the centre axis? As such, to simplify things, make sure that the size of your mask is odd, i.e. mod(hsize,2) == 1.
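For concreteness, here is a minimal end-to-end sketch tying the three steps together; hsize, row and col are hypothetical values you would choose for your own image:
hsize = 25; % hypothetical odd mask size
sigma = (hsize - 1) / 6; % 3-sigma rule from above
mask = fspecial('gaussian', hsize, sigma); % generate the Gaussian shape
%// Normalize to [0,1], then scale to [110,155] for a uint8 image
maskAdjust = (mask - min(mask(:))) / (max(mask(:)) - min(mask(:)));
maskAdjust = uint8(45*maskAdjust) + 110;
row = 200; col = 300; % hypothetical centre of the tree
hsizeHalf = floor(hsize/2);
ndvi(row - hsizeHalf : row + hsizeHalf, col - hsizeHalf : col + hsizeHalf) = maskAdjust;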
This should hopefully answer your questions adequately. Good luck!
