A program to apply a piecewise linear transformation function to a grayscale image

I want to apply the transformation function shown in the figure to a grayscale image. I know how to implement the piecewise linear function in the code below; my question is how do I write a program for the transformation shown in the figure. My code so far:
clear;
pollen = imread('Fig3.10(b).jpg');
u = double(pollen);
[nx, ny] = size(u);
nshades = 256;
r1 = 80; s1 = 10;   % Transformation by piecewise linear function.
r2 = 140; s2 = 245;
uspread = zeros(nx, ny);   % preallocate the output image
for i = 1:nx
    for j = 1:ny
        if (u(i,j) < r1)
            uspread(i,j) = ((s1-0)/(r1-0))*u(i,j);
        end
        if ((u(i,j) >= r1) && (u(i,j) <= r2))
            uspread(i,j) = ((s2-s1)/(r2-r1))*(u(i,j)-r1) + s1;
        end
        if (u(i,j) > r2)
            uspread(i,j) = ((255-s2)/(255-r2))*(u(i,j)-r2) + s2;
        end
    end
end
counts = zeros(nshades,1);   % "hist" would shadow the built-in function, so use a different name
for i = 1:nx
    for j = 1:ny
        g = round(uspread(i,j));   % uspread holds non-integer doubles; round to a shade
        counts(g+1) = counts(g+1) + 1;
    end
end
plot(counts);
pollenspreadmat = uint8(uspread);
imwrite(pollenspreadmat, 'pollenspread.jpg');
Thanks in advance

The figure says that any intensity between A and B should be set to C. All you have to do is modify your two for loops so that, for any value between A and B, the output location is set to C. I'll also assume the range is inclusive. You can simply remove the first and last if conditions and use the middle one:
for i = 1:nx
    for j = 1:ny
        if (u(i,j) >= r1) && (u(i,j) <= r2)
            uspread(i,j) = C;
        end
    end
end
C is a constant that you would set yourself. Usually for segmentation this value is set very high to distinguish the foreground from the background. Your output image is cast to uint8, so C = 255; would work.
However, I would recommend a more vectorized solution: avoid the for loops and use logical indexing instead:
uspread = u;
uspread(u >= r1 & u <= r2) = C;
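The same logical-indexing idea also vectorizes the original three-segment contrast stretch; a minimal sketch reusing u, r1, s1, r2 and s2 from the question:
% Vectorized piecewise linear contrast stretch, no loops.
low  = u < r1;
mid  = u >= r1 & u <= r2;
high = u > r2;
uspread = zeros(size(u));
uspread(low)  = (s1/r1) * u(low);
uspread(mid)  = ((s2-s1)/(r2-r1)) * (u(mid)-r1) + s1;
uspread(high) = ((255-s2)/(255-r2)) * (u(high)-r2) + s2;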

Related

How to combine two images of the same scene for image registration

I am trying to register two grayscale images of the same scene that were taken from two different views. I took the images myself using a Lifecam camera.
To register these images, I used template matching with normalized cross correlation as the similarity measure and found the right location. But the result after combining the two images is not as good as I had hoped, and I don't know how to fix it. Do I need to apply some rotation or translation before combining them? If so, I have no idea how to get the right rotation angle. Or do you have any idea how to fix the result without applying any rotation?
(Input image 1, input image 2, and the combined result were attached as images.)
This is my code:
A = imread('image1.jpg');
B = imread('image2.jpg');
[M1, N1] = size(A);   % sizes of images A and B
[M2, N2] = size(B);
%% finding the coordinates of (r2,c2)
r1 = size(A,1)/2;   % midpoint of image A as coordinate
c1 = size(A,2)/2;
template = imcrop(A, [(c1-20) (r1-20) 40 40]);
[r2, c2] = normcorr(template, B);   % normalized cross correlation
%% compute distances of coordinate (r1,c1) in image A and (r2,c2) in image B
UA = r1;        % distance of (r1,c1) from the top of image A
BA = M1 - r1;   % distance of (r1,c1) from the bottom
LA = c1;        % distance from the left
RA = N1 - c1;   % distance from the right
UB = r2;        % distances of (r2,c2) from the top,
BB = M2 - r2;   % bottom, left and right of image B
LB = c2;
RB = N2 - c2;
%% zero padding for both images
if LA > LB
    L_diff = LA - LB;   % number of columns to pad with zeros on the left
    B = [zeros(M2, L_diff), B];
else
    L_diff = LB - LA;
    A = [zeros(M1, L_diff), A];
end
if RA > RB
    R_diff = RA - RB;   % number of columns to pad with zeros on the right
    B = [B, zeros(M2, R_diff)];
else
    R_diff = RB - RA;
    A = [A, zeros(M1, R_diff)];
end
N1 = size(A, 2);   % updated column counts of images A and B
N2 = size(B, 2);
if UA > UB
    U_diff = UA - UB;   % number of rows to pad with zeros on top
    B = [zeros(U_diff, N2); B];
else
    U_diff = UB - UA;
    A = [zeros(U_diff, N1); A];
end
if BA > BB
    B_diff = BA - BB;   % number of rows to pad with zeros on the bottom
    B = [B; zeros(B_diff, N2)];
else
    B_diff = BB - BA;
    A = [A; zeros(B_diff, N1)];
end
%% find the coordinates of the region covered by both images
if LA > LB
    r = r1;
    c = c1;
else
    r = r2;
    c = c2;
end
if UA >= UB
    i_Start = r - UB + 1;
else
    i_Start = r - UA + 1;
end
if BA >= BB
    i_Stop = r + BB;
else
    i_Stop = r + BA;
end
if LA >= LB
    j_Start = c - c2 + 1;
else
    j_Start = c - c1 + 1;
end
if RA >= RB
    j_Stop = c + RB;
else
    j_Stop = c + RA;
end
%% add images A and B
A = im2double(A);
B = im2double(B);
final_im = A + B;
for i = i_Start:i_Stop
    for j = j_Start:j_Stop
        final_im(i,j) = final_im(i,j)/2;   % average where the two images overlap
    end
end
final_im = im2uint8(final_im);
The answer from rayryeng in Ryan L's first link is quite applicable here. Cross-correlation likely won't provide a close enough match, since the transformation between the two images is more accurately described as a homography than as a 2D rigid transform.
Accurate image registration requires that you find this projective transformation. To do so you can find a set of corresponding points in the two images (using SURF, as mentioned above, usually works well) and then use RANSAC to obtain the homography's parameters from the corresponding points. RANSAC does a nice job even when some of the "corresponding" features in your two images are actually not correct matches. Once found, you can use the transformation to move one of your images to the other's point of view and fuse.
Here's a nice explanation of feature matching, RANSAC, and fusing two images with some Matlab code samples. The lecture uses SIFT features, but the idea still works for SURF.
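For reference, a minimal sketch of that pipeline in MATLAB, assuming the Computer Vision Toolbox is available (estimateGeometricTransform runs MSAC, a RANSAC variant, internally):
A = imread('image1.jpg');   % grayscale inputs, as in the question
B = imread('image2.jpg');
% Detect and describe SURF features in both images.
ptsA = detectSURFFeatures(A);
ptsB = detectSURFFeatures(B);
[featA, validA] = extractFeatures(A, ptsA);
[featB, validB] = extractFeatures(B, ptsB);
% Match descriptors, then robustly estimate the homography.
pairs    = matchFeatures(featA, featB);
matchedA = validA(pairs(:,1));
matchedB = validB(pairs(:,2));
tform    = estimateGeometricTransform(matchedA, matchedB, 'projective');
% Warp A into B's frame and blend the two views.
Awarped = imwarp(A, tform, 'OutputView', imref2d(size(B)));
fused   = imfuse(Awarped, B, 'blend');
imshow(fused);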
The best published approaches to this kind of registration are based on fiducial points. You can choose the clearest edges or crossing points as fiducials, then adjust the smoothness and regularization parameters to register the images together.
Have a look at the SlicerRT package, and let me know if you run into any problems.

Matlab camera oscilloscope

I am currently trying to simulate an oscilloscope plugged into the output of a camera, in the context of digital film-making.
Here is my code:
clear all;
close all;
clc;
A = imread('06.tif');
[l,c,d] = size(A);
n = 256;
B = zeros(n,c);
for i = 1:c
    for j = 1:l
        t = A(j,i);
        B(t+1,i) = B(t+1,i) + 1;
    end
end
B = B/0.45;
B = imresize(B,[l c]);
B = (B/255);
C = zeros(n,c);
for i = 1:c
    for j = 1:l
        t = 0.2126*A(j,i,1) + 0.7152*A(j,i,2) + 0.0723*A(j,i,3); % here is the supposed issue
        C(t+1,i) = C(t+1,i) + 1;
    end
end
C = C/0.45;
C = imresize(C,[l c]);
C = (C/255);
figure(1), imshow(B);
figure(2), imshow(C);
The problem is that I get breaks in the second image, and unfortunately that's the one I want as output. My guess is that the issue lies in the linear combination done in the second for loop, but I cannot pin it down. I have tried both tif and jpg input, and different data formats like uint8 in Matlab, but nothing helps...
Thank you for your attention, I remain available for any questions.

Why do I get this error: "The variable in a parfor cannot be classified"?

I'm trying to use parfor to speed up my code (it takes over 96 seconds, and I have more than one image to process), but I get this error:
The variable B in a parfor cannot be classified
This is the code I've written:
Io = im2double(imread('C:My path\0.1s.tif'));
Io = double(Io);
In = Io;
sigma = [1.8 20];
[X,Y] = meshgrid(-3:3,-3:3);
G = exp(-(X.^2+Y.^2)/(2*1.8^2));
dim = size(In);
B = zeros(dim);
c = parcluster
matlabpool(c)
parfor i = 1:dim(1)
    for j = 1:dim(2)
        % Extract local region.
        iMin = max(i-3,1);
        iMax = min(i+3,dim(1));
        jMin = max(j-3,1);
        jMax = min(j+3,dim(2));
        I = In(iMin:iMax,jMin:jMax);
        % Compute Gaussian intensity weights.
        H = exp(-(I-In(i,j)).^2/(2*20^2));
        % Calculate bilateral filter response.
        F = H.*G((iMin:iMax)-i+3+1,(jMin:jMax)-j+3+1);
        B(i,j) = sum(F(:).*I(:))/sum(F(:));
    end
end
matlabpool close
Any idea?
Unfortunately, it's actually dim that is confusing MATLAB in this case. You can fix it by doing
[n, m] = size(In);
parfor i = 1:n
    for j = 1:m
        B(i, j) = ...
    end
end
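If B still cannot be classified after that change, a pattern that parfor classifies reliably is to build each row in a temporary vector that is local to the iteration and assign it with one sliced operation. A sketch along those lines, reusing the filter from the question (note that parpool replaces matlabpool in newer MATLAB releases):
[n, m] = size(In);
B = zeros(n, m);
parpool;                                % replaces matlabpool in newer releases
parfor i = 1:n
    rowB = zeros(1, m);                 % local to each iteration, so it classifies cleanly
    for j = 1:m
        iMin = max(i-3,1);  iMax = min(i+3,n);
        jMin = max(j-3,1);  jMax = min(j+3,m);
        I = In(iMin:iMax, jMin:jMax);               % local region
        H = exp(-(I - In(i,j)).^2/(2*20^2));        % Gaussian intensity weights
        F = H .* G((iMin:iMax)-i+3+1, (jMin:jMax)-j+3+1);
        rowB(j) = sum(F(:).*I(:)) / sum(F(:));      % bilateral filter response
    end
    B(i,:) = rowB;                      % sliced assignment: loop variable plus colon
end
delete(gcp('nocreate'));                % shut the pool down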

Fast computation of warp matrices

For a fixed and given tform, the imwarp command in the Image Processing Toolbox
B = imwarp(A,tform)
is linear with respect to A, meaning there exists some sparse matrix W, depending on tform but independent of A, such that the above can be equivalently implemented as
B(:)=W*A(:)
for all A of fixed known dimensions [n,n]. My question is whether there are fast/efficient options for computing W. The matrix form is necessary when I need the transpose operation W.'*B(:), or if I need to do W\B(:) or similar linear algebraic things which I can't do directly through imwarp alone.
I know that it is possible to compute W column-by-column by doing
E = zeros(n);
W = spalloc(n^2, n^2, 4*n^2);
for i = 1:n^2
    E(i) = 1;              % probe with the i-th unit impulse image
    tmp = imwarp(E, tform);
    E(i) = 0;
    W(:,i) = tmp(:);       % its response is the i-th column of W
end
but this is brute force and slow.
The routine FUNC2MAT is somewhat better: it uses the loop only to compute and gather the sparse entry data I,J,S of each column W(:,i), and constructs the overall sparse matrix after the loop. It also offers the option of using a PARFOR loop. However, this is still slower than I would like.
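In outline, that gather-then-construct pattern looks something like the sketch below (my paraphrase, not the actual FUNC2MAT source; 'OutputView' pins the output to the input size so that W is square):
I = []; J = []; S = [];                 % triplet buffers for the sparse entries
E = zeros(n);
for i = 1:n^2
    E(i) = 1;                           % probe with the i-th unit impulse image
    tmp = imwarp(E, tform, 'OutputView', imref2d([n n]));
    E(i) = 0;
    nz = find(tmp);                     % nonzero responses to this impulse
    I = [I; nz];                        %#ok<AGROW>
    J = [J; repmat(i, numel(nz), 1)];   %#ok<AGROW>
    S = [S; tmp(nz)];                   %#ok<AGROW>
end
W = sparse(I, J, S, n^2, n^2);          % a single sparse() call after the loop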
Can anyone suggest more speed-optimal alternatives?
EDIT:
For those uncomfortable with my claim that imwarp(A,tform) is linear w.r.t. A, I include the demo script below, which tests that the superposition property is satisfied for random input images and tform data. It can be run repeatedly to see that linearityError is always small, and easily attributable to floating point noise.
tform = affine2d([rand(3,2) [0;0;1]]);   % the last column of an affine2d matrix must be [0;0;1]
%tform = projective2d(rand(3));
fun = @(A) imwarp(A, tform, 'cubic');
I1 = rand(100); I2 = rand(100);
c1 = rand; c2 = rand;
LHS = fun(c1*I1 + c2*I2);        % left-hand side
RHS = c1*fun(I1) + c2*fun(I2);   % right-hand side
linearityError = norm(LHS(:)-RHS(:), 'inf')
That's actually pretty simple:
W = sparse(B(:)/A(:));
Note that W is not unique, but this operation probably produces the most sparse result. Another way to calculate it would be
W = sparse( B(:) * pinv(A(:)) );
but that results in a much less sparse (yet still valid) result.
I constructed the warping matrix from the optical flow fields [u,v], and it works well for my application:
% This function computes the warping matrix.
% M x N is the size of the image.
function [ Fw ] = generateFwi( u, v, M, N )
Fw = zeros(M*N, M*N);   % dense; consider a sparse matrix for large images
k = 1;
for i = 1:M
    for j = 1:N
        newcoord(1) = i + u(i,j);
        newcoord(2) = j + v(i,j);
        newi = newcoord(1);
        newj = newcoord(2);
        if newi > 0 && newj > 0
            newi1x = floor(newi);
            newi1y = floor(newj);
            newi2x = floor(newi);
            newi2y = ceil(newj);
            newi3x = ceil(newi);    % four nearest grid points around the target
            newi3y = floor(newj);
            newi4x = ceil(newi);
            newi4y = ceil(newj);
            x1 = [newi,newj; newi1x,newi1y];
            x2 = [newi,newj; newi2x,newi2y];
            x3 = [newi,newj; newi3x,newi3y];
            x4 = [newi,newj; newi4x,newi4y];
            w1 = pdist(x1,'euclidean');   % distances to the four neighbours
            w2 = pdist(x2,'euclidean');
            w3 = pdist(x3,'euclidean');
            w4 = pdist(x4,'euclidean');
            if ceil(newi) == floor(newi) && ceil(newj) == floor(newj)   % both coordinates are integers
                Fw(k,(newi1x-1)*N+newi1y) = 1;
            elseif ceil(newi) == floor(newi)   % only the row coordinate is an integer
                w = w1 + w2;
                w1new = w1/w;
                w2new = w2/w;
                W = w1new*w2new;
                y1coord = (newi1x-1)*N + newi1y;
                y2coord = (newi2x-1)*N + newi2y;
                if y1coord <= M*N && y2coord <= M*N
                    Fw(k,y1coord) = W/w2new;
                    Fw(k,y2coord) = W/w1new;
                end
            elseif ceil(newj) == floor(newj)   % only the column coordinate is an integer
                w = w1 + w3;
                w1 = w1/w;
                w3 = w3/w;
                W = w1*w3;
                y1coord = (newi1x-1)*N + newi1y;
                y2coord = (newi3x-1)*N + newi3y;
                if y1coord <= M*N && y2coord <= M*N
                    Fw(k,y1coord) = W/w3;
                    Fw(k,y2coord) = W/w1;
                end
            else   % neither coordinate is an integer
                w = w1 + w2 + w3 + w4;
                w1 = w1/w;
                w2 = w2/w;
                w3 = w3/w;
                w4 = w4/w;
                W = w1*w2*w3 + w2*w3*w4 + w3*w4*w1 + w4*w1*w2;
                y1coord = (newi1x-1)*N + newi1y;
                y2coord = (newi2x-1)*N + newi2y;
                y3coord = (newi3x-1)*N + newi3y;
                y4coord = (newi4x-1)*N + newi4y;
                if y1coord <= M*N && y2coord <= M*N && y3coord <= M*N && y4coord <= M*N
                    Fw(k,y1coord) = w2*w3*w4/W;
                    Fw(k,y2coord) = w3*w4*w1/W;
                    Fw(k,y3coord) = w4*w1*w2/W;
                    Fw(k,y4coord) = w1*w2*w3/W;
                end
            end
        else
            Fw(k,k) = 1;
        end
        k = k + 1;
    end
end
end
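One thing to watch when applying Fw: the function linearizes pixel (i,j) as k = (i-1)*N + j, i.e. row-major, while MATLAB's (:) is column-major. A hypothetical usage sketch that accounts for this, assuming an M-by-N image I and flow fields u, v of the same size:
Fw = generateFwi(u, v, M, N);
Iv = reshape(double(I).', [], 1);   % row-major flattening to match k = (i-1)*N + j
Wv = Fw * Iv;                       % apply the warp
Iw = reshape(Wv, N, M).';           % back to an M-by-N image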

Discrete cosine transform (DCT) of an image

I am working on a Matlab function that calculates the DCT (discrete cosine transform) of an image. I don't know what is wrong in my code, but I get an output image filled with the same number. I want to use this formula for my DCT.
Any ideas, please?
function image_comp = dctII(image, b)
[h, w] = size(image);
image = double(image) - 128;
block = zeros(b,b);
image_t = zeros(size(image));
for k = 1:b:h
    for l = 1:b:w
        image_t(k:k+b-1, l:l+b-1) = image(k:k+b-1, l:l+b-1);
        for u = 1:b
            for v = 1:b
                if u == 0
                    Cu = 1/sqrt(2);
                else
                    Cu = 1;
                end
                if v == 0
                    Cv = 1/sqrt(2);
                else
                    Cv = 1;
                end
                Res_sum = 0;
                for x = 1:b
                    for y = 1:b
                        Res_sum = Res_sum + ((image_t(x,y))*cos(((2*x)+1)*u*pi/(2*b))*cos(((2*y)+1)*v*pi/(2*b)));
                    end
                end
                dct = (1/4)*Cu*Cv*Res_sum;
                block(u,v) = dct;
            end
        end
        image_comp(k:k+b-1, l:l+b-1) = block(u,v);
    end
end
end
In the inner loop over x and y, you are not reading from the correct place in image_t. You copied the local block into a location with (k,l) as its upper-left corner, but in the inner loop you always read from the block whose upper-left corner is (1,1) in image_t.
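Beyond that, there appear to be a couple of related off-by-one problems: with MATLAB's 1-based loops, u == 0 and v == 0 are never true, the cosine arguments need 2*(x-1)+1 rather than 2*x+1, and image_comp(...) = block(u,v) writes a single scalar into the whole output block. A corrected sketch, assuming the image dimensions are multiples of b (the 1/4 scaling matches the usual 8-by-8 JPEG convention):
function image_comp = dctII(image, b)
[h, w] = size(image);
image = double(image) - 128;             % level shift, as in the original code
image_comp = zeros(h, w);
for k = 1:b:h
    for l = 1:b:w
        blk = image(k:k+b-1, l:l+b-1);   % read from the current block, not from (1,1)
        block = zeros(b, b);
        for u = 0:b-1                    % 0-based frequency indices
            for v = 0:b-1
                if u == 0, Cu = 1/sqrt(2); else, Cu = 1; end
                if v == 0, Cv = 1/sqrt(2); else, Cv = 1; end
                s = 0;
                for x = 0:b-1            % 0-based spatial indices
                    for y = 0:b-1
                        s = s + blk(x+1, y+1) * ...
                            cos((2*x+1)*u*pi/(2*b)) * ...
                            cos((2*y+1)*v*pi/(2*b));
                    end
                end
                block(u+1, v+1) = (1/4)*Cu*Cv*s;
            end
        end
        image_comp(k:k+b-1, l:l+b-1) = block;   % write the whole block back
    end
end
end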
