I have an image and I would like to blur it in one specific direction and over a specific distance, using MATLAB.
I found out there is a filter called fspecial('motion',len,theta).
Here is an example:
I = imread('cameraman.tif');
imshow(I);
H = fspecial('motion',20,45);
MotionBlur = imfilter(I,H,'replicate');
imshow(MotionBlur);
However, the blurred picture is blurred in two directions! In this case, 225 and 45 degrees.
What should I do in order to blur it in just one specific direction (e.g. 45°) and not both?
I think you want what's called a "comet" kernel. I'm not sure what kernel is used for the "motion" blur, but I'd guess that it's symmetrical based on the image you provided.
Here is some code to play with that applies the comet kernel in one direction. You'll have to change things around if you want an arbitrary angle. You can see from the output that it's smearing in one direction, since there is a black band on only one side (due to the lack of pixels there).
L = 5;        % kernel half-width
sigma = 0.2;  % kernel smoothness
I = imread('cameraman.tif');
x = -L:1.0:L;
[X,Y] = meshgrid(x,x);
H1 = exp((-sigma.*X.^2)+(-sigma.*Y.^2));          % symmetric Gaussian
kernel = H1/sum(H1(:));                           % symmetric kernel (unused, kept for comparison)
Hflag = double((X>0));                            % keep only one half-plane
comet_kernel = Hflag.*H1;
comet_kernel = comet_kernel/sum(comet_kernel(:)); % renormalize
smearedImage = conv2(double(I),comet_kernel,'same');
imshow(smearedImage,[]);
Updated code: This will apply an arbitrary rotation to the comet kernel. Note also the difference between sigma in the previous example and sx and sy here, which control the length and width parameters of the kernel, as suggested by Andras in the comments.
L = 5;     % kernel half-width
sx = 3;    % kernel width parameter
sy = 10;   % kernel length parameter
theta = 0; % smear direction in radians
I = imread('cameraman.tif');
x = -L:1.0:L;
[X,Y] = meshgrid(x,x);
rX = X.*cos(theta)-Y.*sin(theta); % rotated coordinates
rY = X.*sin(theta)+Y.*cos(theta);
H1 = exp(-((rX./sx).^2)-((rY./sy).^2));
Hflag = double(rY>0);             % keep only one half-plane
H1 = H1.*Hflag;
comet_kernel = H1/sum(H1(:));
smearedImage = conv2(double(I),comet_kernel,'same');
imshow(smearedImage,[]);
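For the 45° case from the original question, a minimal usage sketch (my assumption: the sign of theta may need flipping depending on which way you want the smear), applying the kernel with imfilter and 'replicate' padding as in the question's own example:
L = 5; sx = 3; sy = 10;
theta = 45*pi/180;                % blur direction in radians
x = -L:1.0:L;
[X,Y] = meshgrid(x,x);
rX = X.*cos(theta)-Y.*sin(theta);
rY = X.*sin(theta)+Y.*cos(theta);
comet_kernel = exp(-((rX./sx).^2)-((rY./sy).^2)).*double(rY>0);
comet_kernel = comet_kernel/sum(comet_kernel(:));
I = imread('cameraman.tif');
smearedImage = imfilter(double(I),comet_kernel,'replicate'); % replicate padding avoids the black border
imshow(smearedImage,[]);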
Based on Anger Density's answer, I wrote this code, which solves my problem completely:
L = 10;   % kernel half-width
sx = 0.1;
sy = 100;
THETA = [0,45,90,135,180,225,270,315,360]*pi/180;
I = imread('cameraman.tif');
x = -L:1.0:L;
[X,Y] = meshgrid(x,x);
for i = 1:length(THETA)
    theta = (THETA(i)+pi)*-1;
    rX = X.*cos(theta)-Y.*sin(theta);
    rY = X.*sin(theta)+Y.*cos(theta);
    H1 = exp(-((rX./sx).^2)-((rY./sy).^2));
    Hflag = double(rY>0);
    H1 = H1.*Hflag;
    comet_kernel = H1/sum(H1(:));
    smearedImage = conv2(double(I),comet_kernel,'same');
    % Fix edges
    smearedImage(:,[1:L, end-L:end]) = I(:,[1:L, end-L:end]); % left/right edges
    smearedImage([1:L, end-L:end], :) = I([1:L, end-L:end], :); % top/bottom edges
    % Keep only the inner blur
    smearedImage(L:end-L,L:end-L) = min(smearedImage(L:end-L,L:end-L),double(I(L:end-L,L:end-L)));
    figure
    imshow(smearedImage,[]);
    title(num2str(THETA(i)*180/pi))
    set(gcf, 'Units', 'Normalized', 'OuterPosition', [0 0 1 1]);
end
I have 2 greyscale images that I am trying to align using a scalar scaling (size 1), a rotation matrix (size [2,2]) and a translation vector (size [2,1]). I can calculate image1's transformed coordinates as
y = s*R*x + t;
Below the resulting images are shown.
The first image is image1 before transformation.
The second image is image1 (red) with attempted interpolation using interp2, shown on top of image2 (green).
The third image is when I manually insert the pixel values from image1 into an empty array (of the same size as image2) using the transformed coordinates.
From this we can see that the coordinate transformation must have been successful, as the images are aligned, although not perfectly (which is to be expected, since only 2 coordinates were used in calculating s, R and t).
How come interp2 is not producing a result more similar to when I manually insert pixel values?
Below, the code for doing this is included:
Interpolation code
function [transformed_image] = interpolate_image(im_r,im_t,s,R,t)
[m,n] = size(im_t);
% doesn't help if I use get_grid that the other function is using here
[~, grid_xr, grid_yr] = get_ipgrid(im_r);
[x_t, grid_xt, grid_yt] = get_ipgrid(im_t);
y = s*R*x_t + t;
yx = reshape(y(1,:), m,n);
yy = reshape(y(2,:), m,n);
transformed_image = interp2(grid_xr, grid_yr, im_r, yx, yy, 'nearest');
end
function [x, grid_x, grid_y] = get_ipgrid(image)
[m,n] = size(image);
[grid_x,grid_y] = meshgrid(1:n,1:m);
x = [reshape(grid_x, 1, []); reshape(grid_y, 1, [])]; % X is [2xM*N] coordinate pairs
end
The manual code
function [transformed_image] = transform_image(im_r,im_t,s,R,t)
[m,n] = size(im_t);
[x_t, grid_xt, grid_yt] = get_grid(im_t);
y = s*R*x_t + t;
ymat = reshape(y',m,n,2);
yx = ymat(:,:,1);
yy = ymat(:,:,2);
transformed_image = zeros(m,n);
for i = 1:m
for j = 1:n
% make sure coordinates are inside
if (yx(i,j) < m && yy(i,j) < n && yx(i,j) > 0.5 && yy(i,j) > 0.5)
transformed_image(round(yx(i,j)),round(yy(i,j))) = im_r(i,j);
end
end
end
end
function [x, grid_x, grid_y] = get_grid(image)
[m,n] = size(image);
[grid_y,grid_x] = meshgrid(1:n,1:m);
x = [grid_x(:) grid_y(:)]'; % X is [2xM*N] coordinate pairs
end
Can anyone see what I'm doing wrong with interp2? I feel like I have tried everything.
Turns out I got interpolation all wrong.
In my question I calculate the coordinates of im1 in im2.
However, the way interpolation works is that I need to calculate the coordinates of im2 in im1, so that I can map the image as shown below.
This means that I also calculated the wrong s, R and t, since they were used to transform im1 -> im2, whereas I needed im2 -> im1 (this is also called the inverse transform). Below is the manual code, which is basically the same as interp2 with nearest-neighbour interpolation.
function [transformed_image] = transform_image(im_r,im_t,s,R,t)
[m,n] = size(im_t);
[x_t, grid_xt, grid_yt] = get_grid(im_t);
y = s*R*x_t + t;
ymat = reshape(y',m,n,2);
yx = ymat(:,:,1);
yy = ymat(:,:,2);
transformed_image = zeros(m,n);
for i = 1:m
for j = 1:n
% make sure coordinates are inside
if (yx(i,j) < m && yy(i,j) < n && yx(i,j) > 0.5 && yy(i,j) > 0.5)
transformed_image(i,j) = im_r(round(yx(i,j)),round(yy(i,j)));
end
end
end
end
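For reference, a hedged equivalent using interp2 directly (my assumption, not from the original post): with yx and yy computed from the inverse (im2 -> im1) mapping as above, the loop should reduce to a single call. Note that interp2 takes column coordinates before row coordinates, so yy and yx swap places, and the trailing 0 fills out-of-range points.
[mr, nr] = size(im_r);
[grid_x, grid_y] = meshgrid(1:nr, 1:mr);   % pixel grid of im_r
transformed_image = interp2(grid_x, grid_y, im_r, yy, yx, 'nearest', 0);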
I was trying to implement the IBVS algorithm (the one explained in the Introduction here) in MATLAB myself, but I am facing the following problem: the algorithm seems to work only in cases where the camera does not have to change its orientation with respect to the world frame. For example, if I just try to make one vertex of the initial (almost) square move closer to its opposite vertex, the algorithm does not work, as can be seen in the following image.
The red x marks are the desired projections, the blue circles are the initial ones, and the green ones are the ones I get from my algorithm.
Also, the errors are not decreasing exponentially as they should.
What am I doing wrong? I am attaching my MATLAB code, which is fully runnable. If anyone could take a look, I would be really grateful. I took out the code that was performing the plotting; I hope it is more readable now. Visual servoing has to be performed with at least 4 target points, because otherwise the problem has no unique solution. If you are willing to help, I would suggest you take a look at the calc_Rotation_matrix() function to check that the rotation matrix is properly calculated, then verify that the line ds = vc; in euler_ode is correct. The camera orientation is expressed in Euler angles according to this convention. Finally, one could check whether the interaction matrix L is properly calculated.
function VisualServo()
global A3D B3D C3D D3D A B C D Ad Bd Cd Dd
%coordinates of the 4 points wrt camera frame
A3D = [-0.2633;0.27547;0.8956];
B3D = [0.2863;-0.2749;0.8937];
C3D = [-0.2637;-0.2746;0.8977];
D3D = [0.2866;0.2751;0.8916];
%initial projections (computed here only to show their relation with the desired ones)
A=A3D(1:2)/A3D(3);
B=B3D(1:2)/B3D(3);
C=C3D(1:2)/C3D(3);
D=D3D(1:2)/D3D(3);
%initial camera position and orientation
%orientation is expressed in Euler angles (X-Y-Z around the inertial frame
%of reference)
cam=[0;0;0;0;0;0];
%desired projections
Ad=A+[0.1;0];
Bd=B;
Cd=C+[0.1;0];
Dd=D;
t0 = 0;
tf = 50;
s0 = cam;
%time step
dt=0.01;
t = euler_ode(t0, tf, dt, s0);
end
function ts = euler_ode(t0,tf,dt,s0)
global A3D B3D C3D D3D Ad Bd Cd Dd
s = s0;
ts=[];
for t=t0:dt:tf
    ts(end+1)=t;
    cam = s;
    % rotation matrix R_WCS_CCS
    R = calc_Rotation_matrix(cam(4),cam(5),cam(6));
    r = cam(1:3);
    % 3D coordinates of the 4 points wrt the NEW camera frame
    A3D_cam = R'*(A3D-r);
    B3D_cam = R'*(B3D-r);
    C3D_cam = R'*(C3D-r);
    D3D_cam = R'*(D3D-r);
    % NEW projections
    A=A3D_cam(1:2)/A3D_cam(3);
    B=B3D_cam(1:2)/B3D_cam(3);
    C=C3D_cam(1:2)/C3D_cam(3);
    D=D3D_cam(1:2)/D3D_cam(3);
    % computing the L matrices
    L1 = L_matrix(A(1),A(2),A3D_cam(3));
    L2 = L_matrix(B(1),B(2),B3D_cam(3));
    L3 = L_matrix(C(1),C(2),C3D_cam(3));
    L4 = L_matrix(D(1),D(2),D3D_cam(3));
    L = [L1;L2;L3;L4];
    %updating the projection errors
    e = [A-Ad;B-Bd;C-Cd;D-Dd];
    %compute camera velocity
    vc = -0.5*pinv(L)*e;
    %change of the camera position and orientation
    ds = vc;
    %update camera position and orientation
    s = s + ds*dt;
end
ts(end+1)=tf+dt;
end
function R = calc_Rotation_matrix(theta_x, theta_y, theta_z)
Rx = [1 0 0; 0 cos(theta_x) -sin(theta_x); 0 sin(theta_x) cos(theta_x)];
Ry = [cos(theta_y) 0 sin(theta_y); 0 1 0; -sin(theta_y) 0 cos(theta_y)];
Rz = [cos(theta_z) -sin(theta_z) 0; sin(theta_z) cos(theta_z) 0; 0 0 1];
R = Rx*Ry*Rz;
end
function L = L_matrix(x,y,z)
L = [-1/z,0,x/z,x*y,-(1+x^2),y;
0,-1/z,y/z,1+y^2,-x*y,-x];
end
Cases that work:
Ad=2*A;
Bd=2*B;
Cd=2*C;
Dd=2*D;
Ad=A+1;
Bd=B+1;
Cd=C+1;
Dd=D+1;
Ad=2*A+1;
Bd=2*B+1;
Cd=2*C+1;
Dd=2*D+1;
Cases that do NOT work:
Rotation by 90 degrees and zoom out (zoom out alone works, but I am doing it here for better visualization)
Ad=2*D;
Bd=2*C;
Cd=2*A;
Dd=2*B;
Your problem comes from the way you move the camera using the resulting visual servoing velocity. Rather than
cam = cam + vc*dt;
you should compute the new camera pose using the exponential map:
cam = cam*expm(vc*dt);
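Taken literally, that last line will not run, since cam here is a 6-vector. A minimal sketch of one way to realize it (my assumption, not from the original answer: keep the pose as a 4x4 homogeneous transform T and split vc = [v; w] into linear and angular parts):
v = vc(1:3);                     % linear velocity
w = vc(4:6);                     % angular velocity
W = [   0  -w(3)  w(2);
      w(3)    0  -w(1);
     -w(2)  w(1)    0];          % skew-symmetric matrix of w
xi = [W v; 0 0 0 0];             % 4x4 twist matrix
T = T * expm(xi*dt);             % integrate the pose on SE(3)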
I'm using normxcorr2 to find the area that exactly matches my pattern, and I also want to find the other areas (in the red rectangle) that look like the pattern. I think it will work if I can find the next maximum and so on, with the constraint that each new maximum must not lie inside an already detected match area, but I can't do it. Or if you have any other idea for using normxcorr2 to find the other areas, please advise me; I don't have any idea at all.
Here's my code. I modified it from this one: http://www.mathworks.com/products/demos/image/cross_correlation/imreg.html
onion = imread('pattern103.jpg'); %pattern image
peppers = imread('rsz_1jib-159.jpg'); %Original image
onion = rgb2gray(onion);
peppers = rgb2gray(peppers);
%imshow(onion)
%figure, imshow(peppers)
c = normxcorr2(onion,peppers);
figure, surf(c), shading flat
% offset found by correlation
[max_c, imax] = max(abs(c(:)));
[ypeak, xpeak] = ind2sub(size(c),imax(1));
corr_offset = [(xpeak-size(onion,2))
               (ypeak-size(onion,1))]; % offset of the matched window
offset = corr_offset;
xoffset = offset(1);
yoffset = offset(2);
xbegin = round(xoffset+1); fprintf(['xbegin = ',num2str(xbegin)]); fprintf('\n');
xend = round(xoffset+size(onion,2)); fprintf(['xend = ',num2str(xend)]); fprintf('\n');
ybegin = round(yoffset+1); fprintf(['ybegin = ',num2str(ybegin)]); fprintf('\n');
yend = round(yoffset+size(onion,1)); fprintf(['yend = ',num2str(yend)]); fprintf('\n');
% extract region from peppers and compare to onion
extracted_onion = peppers(ybegin:yend,xbegin:xend,:);
if isequal(onion,extracted_onion)
disp('pattern103.jpg was extracted from rsz_org103.jpg')
end
recovered_onion = uint8(zeros(size(peppers)));
recovered_onion(ybegin:yend,xbegin:xend,:) = onion;
figure, imshow(recovered_onion)
[m,n,p] = size(peppers);
mask = ones(m,n);
i = find(recovered_onion(:,:,1)==0);
mask(i) = .2; % try experimenting with different levels of
% transparency
% overlay images with transparency
figure, imshow(peppers(:,:,1)) % show only red plane of peppers
hold on
h = imshow(recovered_onion); % overlay recovered_onion
set(h,'AlphaData',mask)
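For the "next maximum" idea described in the question, a minimal sketch (an untested assumption on my part): zero out a pattern-sized neighbourhood of c around the first peak, then take the maximum again.
c2 = c;
[h, w] = size(onion);
rows = max(1,ypeak-h+1) : min(size(c,1),ypeak+h-1);
cols = max(1,xpeak-w+1) : min(size(c,2),xpeak+w-1);
c2(rows,cols) = 0;               % suppress the first match area
[max_c2, imax2] = max(abs(c2(:)));
[ypeak2, xpeak2] = ind2sub(size(c2),imax2(1));   % second-best match
Repeating this gives the third match, and so on.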
I have read an image file into MATLAB and I am trying to stretch it in one direction, but by a variable (sinusoidal) amount. This would create an accordion effect on the image. I have toyed around with imresize; however, that only resizes the image linearly. I would like the amount of "stretch" to vary for each image line. I tried to convey this with the following code:
periods = 10; % Number of "stretch" cycles
sz = size(original_image,2)/periods;
s = 0;
x = 0;
for index = 1:periods
B = original_image(:,round(s+1:s+sz));
if mod(index,2) == 0
amp = 1.5;
else
amp = 0.75;
end
xi = size(B,2)*amp;
new_image(:,x+1:x+xi) = imresize(B, [size(B,1) size(B,2)*amp]);
s = s + sz;
x = x+xi;
end
You can see that segments of the image are stretched, then compressed, then stretched, etc, like an accordion. However, each segment has a uniform amount of stretch, whereas I'd like it to be increasing then decreasing as you move along the image.
I have also looked at MATLAB's example of Applying a Sinusoidal Transformation to a Checkerboard, which seems very applicable to my problem; however, I have been trying and cannot get it to produce the desired result for my image.
Any help is much appreciated.
UPDATE:
Thank you for Answer #1. I was unable to get it to work for me, but I also realized it would have resulted in loss of data, as the code only sampled certain lines of the original image, and other lines would have been ignored.
After experimenting further, I developed the code below, using a checkerboard as an example. While cumbersome, it does get the job done. However, when I tried the script on an actual high-resolution image, it was extremely slow and ended up failing due to running out of memory. I believe this is because of the excessive number of imresize calls in the loop.
I = checkerboard(10,50);
I = imrotate(I,90);
[X Y] = size(I);
k = 4; % Number of "cycles"
k = k*2;
x = 1;
y = 2;
z = 2;
F = [];
i = 1;
t = 0;
s = 0;
for j = 1:k/2
    t = t + 1;
    for inc = round(s+1):round(Y/k*t)
        Yi = i + 1;
        F(:,(x:y)) = imresize(I(:,(inc:inc)),[X Yi]);
        x = y + 1;
        y = x + z;
        z = z + 1;
        i = i + 1;
    end
    y = y - 2;
    z = z - 4;
    for inc = round(Y/k*t+1):round(Y/k*(t+1))
        Yi = i - 1;
        F(:,(x:y)) = imresize(I(:,(inc:inc)),[X Yi]);
        x = y + 1;
        y = x + z;
        z = z - 1;
        i = i - 1;
    end
    y = y + 2;
    z = z + 4;
    s = Y/k*(t+1);
    t = t + 1;
end
Fn = imresize(F, [X Y]);
imshow(Fn);
Does anyone know of a simpler way to achieve this? If you run the code above, you can see the effect I am trying to achieve. Unfortunately, my method above does not allow me to adjust the amplitude of the "stretch" either, only the number of "cycles," or frequency. Help on this would also be appreciated. Much thanks!
Here is how I would approach it:
Determine how the coordinates of each point in your final image F map into your initial image I of size (M,N).
Since you want to stretch horizontally only, given a point (xF,yF) in your final image, that point would be (xI,yI) in your initial image, where xI and yI can be obtained as follows:
yI = yF;
xI = xF + L*sin(xF*K);
Notes:
these equations do not guarantee that xI remains within the range [1:N], so cropping needs to be added
K controls how many wrinkles you want to have in your accordion effect. For example, if you only want one wrinkle, K would be 2*pi/N
L controls how much stretching you want to apply
Then simply express your image F from image I with the transform from step 1.
Putting it all together, the code below creates a sample image I and generates the image F as follows:
% Generate a sample input image
N=500;
xF=1:N;
I=(1:4)'*xF/N*50;
% Set the parameters for your accordion transform
K=2*pi/N;
L=100;
% Apply the transform
F=I(:, round(min(N*ones(1,N), max(ones(1,N), (xF + L*sin(xF*K))))) );
% Display the input and output images side by side
image(I);
figure;
image(F);
If you run this exact code you get:
As you can see, the final image on the right stretches the center part of the image on the left, giving you an accordion effect with one wrinkle.
You can fiddle with K and L and adjust the formula to get the exact effect you want, but note how, by expressing the transform in matrix form, MATLAB executes the code in a fraction of a second. If there is one takeaway for you, it is that you should stay away from for loops and complex per-pixel processing whenever you can.
Have fun!
I have two similar images: [A] and [B] (please see images). They are offset in X and Y. How can I align A over B, using a pixel from A as reference? In other words, by locating the indicated pixel from A on B, and centering A and B on this pixel.
Thank you.
Final result, made manually:
You can do it manually:
img1 = 255-mean(imread('a1.png'),3);
img2 = 255-mean(imread('a2.png'),3);
subplot(221);imagesc(img1);axis image
[x1 y1] = ginput(1);   % click the reference pixel in image 1
subplot(222);imagesc(img2);axis image
[x2 y2] = ginput(1);   % click the matching pixel in image 2
x = x1-x2;
y = y1-y2;
T = maketform('affine',[1 0 x;0 1 y; 0 0 1]');
img2N = imtransform(img2,T,'xdata',[1 size(img1,2)],'ydata',[1 size(img1,1)]);
subplot(2,2,[3 4]);
imagesc(max(img1,img2N));axis image
To do it automatically, you can do this:
% assumes size(img2) <= size(img1)
img1 = 255-mean(imread('a1.png'),3);
img2 = 255-mean(imread('a2.png'),3);
subplot(221);imagesc(img1);axis image
subplot(222);imagesc(img2);axis image
colormap(gray(256))
c = normxcorr2(img2,img1);
[y x] = find(c==max(c(:)));
y = y-size(img2,1);
x = x-size(img2,2);
T = maketform('affine',[1 0 x;0 1 y; 0 0 1]');
img2N = imtransform(img2,T,'xdata',[1 size(img1,2)],'ydata',[1 size(img1,1)]);
subplot(2,2,[3 4]);
imagesc(max(img1,img2N));axis image
I think what you want is image registration, which requires, in your case, at least 2 control points, because it's an affine transformation without reflection. Given the similarity of those 2 images, I think it's easy to find another reference point. After that, you can use imtransform, or simply cp2tform, to perform the registration.
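A minimal sketch of that approach (the point pairs below are hypothetical placeholders; in practice you would pick two matching points in each image, e.g. with ginput):
A = imread('A.jpg'); B = imread('B.jpg');
movingPts = [30 40; 120 200];   % two points in A (placeholder values)
fixedPts  = [50 55; 140 215];   % the matching points in B (placeholders)
T = cp2tform(movingPts, fixedPts, 'nonreflective similarity');
registered = imtransform(A, T, 'XData', [1 size(B,2)], 'YData', [1 size(B,1)]);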
You will need to fine-tune the 'XData' and 'YData' properties, but you could do this...
rgbA = imread('A.jpg');
rgbB = imread('B.jpg');
alpha(.2)
image(rgbA,'XData',2)
alpha(.2)
hold on
image(rgbB,'XData',2)
alpha(.2)