I have 2 greyscale images that I am trying to align using a scalar scaling s (1x1), a rotation matrix R (2x2) and a translation vector t (2x1). I can calculate image1's transformed coordinates as
y = s*R*x + t;
Below the resulting images are shown.
The first image is image1 before transformation.
The second image is image1 (red) with the attempted interp2 interpolation shown on top of image2 (green).
The third image is what I get when I manually insert the pixel values from image1 into an empty array (of the same size as image2) using the transformed coordinates.
From this we can see that the coordinate transformation must have been successful, as the images are aligned, although not perfectly (which is to be expected since only 2 coordinates were used in calculating s, R and t).
How come interp2 is not producing a result more similar to the one I get when I manually insert pixel values?
Below the code for doing this is included:
Interpolation code
function [transformed_image] = interpolate_image(im_r,im_t,s,R,t)
[m,n] = size(im_t);
% it doesn't help if I use get_grid (which the manual code uses) here
[~, grid_xr, grid_yr] = get_ipgrid(im_r);
[x_t, grid_xt, grid_yt] = get_ipgrid(im_t);
y = s*R*x_t + t;
yx = reshape(y(1,:), m,n);
yy = reshape(y(2,:), m,n);
transformed_image = interp2(grid_xr, grid_yr, im_r, yx, yy, 'nearest');
end
function [x, grid_x, grid_y] = get_ipgrid(image)
[m,n] = size(image);
[grid_x,grid_y] = meshgrid(1:n,1:m);
x = [reshape(grid_x, 1, []); reshape(grid_y, 1, [])]; % X is [2xM*N] coordinate pairs
end
The manual code
function [transformed_image] = transform_image(im_r,im_t,s,R,t)
[m,n] = size(im_t);
[x_t, grid_xt, grid_yt] = get_grid(im_t);
y = s*R*x_t + t;
ymat = reshape(y',m,n,2);
yx = ymat(:,:,1);
yy = ymat(:,:,2);
transformed_image = zeros(m,n);
for i = 1:m
for j = 1:n
% make sure coordinates are inside
if (yx(i,j) < m & yy(i,j) < n & yx(i,j) > 0.5 & yy(i,j) > 0.5)
transformed_image(round(yx(i,j)),round(yy(i,j))) = im_r(i,j);
end
end
end
end
function [x, grid_x, grid_y] = get_grid(image)
[m,n] = size(image);
[grid_y,grid_x] = meshgrid(1:n,1:m);
x = [grid_x(:) grid_y(:)]'; % X is [2xM*N] coordinate pairs
end
Can anyone see what I'm doing wrong with interp2? I feel like I have tried everything.
Turns out I got interpolation all wrong.
In my question I calculate the coordinates of im1 in im2.
However, the way interpolation works is that I need to calculate the coordinates of im2 in im1, so that I can map the image as shown below.
This means that I also calculated the wrong s, R and t, since they were used to transform im1 -> im2, whereas I needed im2 -> im1 (this is also called the inverse transform). Below is the manual code, which is basically the same as interp2 with nearest-neighbour interpolation:
function [transformed_image] = transform_image(im_r,im_t,s,R,t)
[m,n] = size(im_t);
[x_t, grid_xt, grid_yt] = get_grid(im_t);
y = s*R*x_t + t;
ymat = reshape(y',m,n,2);
yx = ymat(:,:,1);
yy = ymat(:,:,2);
transformed_image = zeros(m,n);
for i = 1:m
for j = 1:n
% make sure coordinates are inside
if (yx(i,j) < m & yy(i,j) < n & yx(i,j) > 0.5 & yy(i,j) > 0.5)
transformed_image(i,j) = im_r(round(yx(i,j)),round(yy(i,j)));
end
end
end
end
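As a side note, if you already have the forward parameters (im1 -> im2), the inverse ones can be computed in closed form rather than re-estimated from point pairs. A small sketch, assuming R is a proper rotation matrix (so its inverse is its transpose):
% Forward model:  y = s*R*x + t            (im1 -> im2)
% Inverse model:  x = (1/s)*R'*(y - t) = s_inv*R_inv*y + t_inv
s_inv = 1/s;
R_inv = R.';               % for a rotation matrix, inv(R) == R'
t_inv = -(1/s) * R.' * t;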
I'm trying to perform an image rotation without using MATLAB's built-in rotation function.
But I'm still getting this error:
Error using .'
Transpose on ND array is not defined. Use PERMUTE instead.
Error in interp2 (line 130)
V = V.';
But I don't know why this error occurs, and I don't know how to adapt the interp2 call or use PERMUTE to make it work (I have read the MATLAB help).
Could you please help me fix the code?
Thanks in advance!
clc; clear all; close all;
input_image = imread('mri.png');
Z = double(input_image);
Size = size(Z);
[X,Y] = meshgrid(1:Size(2), 1:Size(1));
%Center of an image
c = Size(end:-1:1)/2;
%Angle of rotation
angle = 45;
t = angle*pi/180;
%Making the rotation
ct = cos(t);
st = sin(t);
Xi = c(1) + ct*(X - c(1)) - st*(Y - c(2));
Yi = c(2) + st*(X - c(1)) + ct*(Y - c(2));
%Interpolation
Zi = interp2(X, Y, Z, Xi, Yi);
figure()
subplot(121); imshow(input_image); title('Original image');
subplot(122); imshow(uint8(Zi)); title('Rotated image without embedded function');
Z is a 3D matrix and interp2 only works for 2D matrices. So you have to do the interpolation for each colour separately, and recombine them:
%Interpolation
Zir = interp2(X, Y, Z(:,:,1), Xi, Yi);
Zig = interp2(X, Y, Z(:,:,2), Xi, Yi);
Zib = interp2(X, Y, Z(:,:,3), Xi, Yi);
Zi = cat(3, Zir, Zig, Zib);
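If the number of channels is not fixed, a loop over the third dimension does the same thing (a small generalisation of the snippet above, assuming Z has size [rows x cols x nChannels]):
Zi = zeros(size(Z));
for ch = 1:size(Z, 3)
    % interpolate one channel at a time, since interp2 only handles 2D data
    Zi(:,:,ch) = interp2(X, Y, Z(:,:,ch), Xi, Yi);
end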
I have a simple program which replaces a selected region of one image with the corresponding region in another image. I am trying to use imrect() in conjunction with makeConstrainToRectFcn to select a rectangular ROI which cannot be extended beyond the boundaries of the image.
However, when I run the code, the ROI can initially be drawn to include the areas outside the image frame. This leads to the error: Index exceeds matrix dimensions.
Is there any way that the rectangle cannot be drawn outside the image from the outset? Alternatively, is it possible to ensure that the operation does not execute unless the rectangle is constrained within the axes limits?
Any suggestions would be greatly appreciated.
My code:
% Sample images:
X=imread('office_1.jpg');
Y=imread('office_5.jpg');
figure, imshow(X)
h = imrect;
api = iptgetapi(h);
fcn = makeConstrainToRectFcn('imrect',get(gca,'XLim'),...
get(gca,'YLim'));
api.setPositionConstraintFcn(fcn);
wait(h);
rect = getPosition(h);
x1 =rect(1);
x2 = x1 + rect(3);
y1 =rect(2);
y2 = y1 + rect(4);
Z = X; % Initialize
Z(y1:y2, x1:x2, :) = Y(y1:y2, x1:x2, :);
imshow(Z)
This should do the job:
% Sample images:
X = imread('office_1.jpg');
Y = imread('office_5.jpg');
% Show image X:
figure, imshow(X);
% Define the ROI constraint:
h = imrect();
h.setPositionConstraintFcn(@(p) roi_constraint(p,size(X)));
% Wait for the ROI to be confirmed:
roi = round(wait(h));
x1 = roi(1);
x2 = x1 + roi(3);
y1 = roi(2);
y2 = y1 + roi(4);
% Create the final image Z and display it:
Z = X;
Z(y1:y2,x1:x2,:) = Y(y1:y2,x1:x2,:);
imshow(Z);
% Auxiliary function for ROI constraint:
function p_adj = roi_constraint(p,img_size)
% p is the candidate position [xmin ymin width height];
% keep the top-left corner inside the image and cap the width/height
p_adj(1) = max([1 p(1)]);                  % xmin >= 1
p_adj(2) = max([1 p(2)]);                  % ymin >= 1
p_adj(3) = min([(img_size(2) - 1) p(3)]);  % width  <= cols - 1
p_adj(4) = min([(img_size(1) - 1) p(4)]);  % height <= rows - 1
end
The script has been tested under MATLAB 2017a and works as expected. As you can see, the main difference is the way the size constraint is handled: in your case, it wasn't properly applied before wait was called, thus returning an invalid rectangle. Also, in order to avoid wrong offsets, the round function has been applied to the returned rectangle.
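If you would rather keep your original makeConstrainToRectFcn approach, another defensive option is to clamp and round the returned rectangle before indexing, so out-of-range values can never reach the assignment. A rough sketch (not part of the tested script above):
rect = round(wait(h));
x1 = max(1, rect(1));                    % clamp left edge to the image
y1 = max(1, rect(2));                    % clamp top edge to the image
x2 = min(size(X, 2), rect(1) + rect(3)); % clamp right edge
y2 = min(size(X, 1), rect(2) + rect(4)); % clamp bottom edge
Z = X;
Z(y1:y2, x1:x2, :) = Y(y1:y2, x1:x2, :);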
I'm trying to convert an image of many concentric circles from Cartesian to polar coordinates (so that in the new image the circles become straight lines instead of circles, see the image below), and that works out just fine using the following code:
[r, c] = size(img);
r=floor(r/2);
c=floor(c/2);
[X, Y] = meshgrid(-c:c-1,-r:r-1);
[theta, rho] = cart2pol(X, Y);
subplot(221), imshow(img), axis on;
hold on;
subplot(221), plot(xCenter,yCenter, 'r+');
subplot(222), warp(theta, rho, zeros(size(theta)), img);
view(2), axis square;
The problem is, I don't understand why it even works (obviously it's not my code). When I use the function cart2pol, I don't even use the image; it's just some vectors x and y generated from the meshgrid function.
Another problem is that I want to obtain a new image (not just to be able to draw it with the warp function) which is the original image expressed in theta and rho coordinates (meaning the same pixels, but rearranged). I'm not even sure how to ask this; in the end I want an image that is a matrix, so that I can sum each row and turn the matrix into a column vector.
You can think of your image as being a 2D matrix, where each pixel has an X and Y coordinate
[(1,1) (1,2) (1,3) .... (1,c)]
[(2,1) (2,2) (2,3) .... (2,c)]
[(3,1) (3,2) (3,3) .... (3,c)]
[.... .... .... .... .... ]
[(r,1) (r,2) (r,3) .... (r,c)]
In the code that you posted, each of these (X,Y) coordinates is mapped to its equivalent polar coordinate (R, theta), using the center of the image, floor(c/2) and floor(r/2), as the reference point.
% Map the pixel value at (1,1) to its polar equivalent
[r,theta] = cart2pol(1 - floor(r/2),1 - floor(c/2));
So whatever pixel value was at (1,1) should now appear in your new polar coordinate space at (r,theta). It is important to note that no information about the actual pixel values matters for this conversion; rather, we just want to perform this transformation for each pixel within the image.
So first we figure out where the center of the image is:
[r, c] = size(img);
r = floor(r / 2);
c = floor(c / 2);
Then we figure out the (X,Y) coordinates for every point in the image (after the center has already been subtracted out):
[X, Y] = meshgrid(-c:c-1,-r:r-1);
Now convert all of these cartesian points to polar coordinates
[theta, rho] = cart2pol(X, Y);
All that warp now does is say "display the value of img at (X,Y) at its corresponding location in (theta, rho)":
warp(theta, rho, zeros(size(theta)), img);
Now it seems that you want a new 2D image where the dimensions are [nTheta, nRho]. To do this, you could use griddata to interpolate your scattered (theta, rho) image (which is displayed by warp above) to a regular grid.
% This is the spacing of your radius axis (columns)
rhoRange = linspace(0, max(rho(:)), 100);
% This is the spacing of your theta axis (rows)
thetaRange = linspace(-pi, pi, 100);
% Generate a grid of all (theta, rho) coordinates in your destination image
[T,R] = meshgrid(thetaRange, rhoRange);
% Now map the values in img to your new image domain
theta_rho_image = griddata(theta, rho, double(img), T, R);
Take a look at all the interpolation methods for griddata to figure out which is most appropriate for your scenario.
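For instance, the method is passed as the last argument; nearest-neighbour keeps the binary circle values crisp, while the default linear method smooths them (just an illustration of the call):
% use nearest-neighbour interpolation instead of the default 'linear'
theta_rho_image = griddata(theta, rho, double(img), T, R, 'nearest');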
There were a couple of other issues (like the rounding of the center) which caused the result to be slightly incorrect. A fully working example is provided below:
% Create an image of circles
radii = linspace(0, 40, 10);
rows = 100;
cols = 100;
img = zeros(rows, cols);
for k = 1:numel(radii)
t = linspace(0, 2*pi, 1000);
xx = round((cos(t) * radii(k)) + (cols / 2));
yy = round((sin(t) * radii(k)) + (rows / 2));
toremove = xx > cols | xx < 1 | yy > rows | yy < 1;
inds = sub2ind(size(img), yy(~toremove), xx(~toremove)); % rows are y, columns are x
img(inds) = 1;
end
[r,c] = size(img);
center_row = r / 2;
center_col = c / 2;
[X,Y] = meshgrid((1:c) - center_col, (1:r) - center_row);
[theta, rho] = cart2pol(X, Y);
rhoRange = linspace(0, max(rho(:)), 1000);
thetaRange = linspace(-pi, pi, 1000);
[T, R] = meshgrid(thetaRange, rhoRange);
theta_rho_image = griddata(theta, rho, double(img), T, R);
figure
subplot(1,2,1);
imshow(img);
title('Original Image')
subplot(1,2,2);
imshow(theta_rho_image);
title('Polar Image')
And the result
I have two similar images, [A] and [B] (please see the images). They are offset in X and Y. How can I align A over B, using a pixel from A as a reference? In other words, locate the indicated pixel from A on B, and make A and B centered on this pixel.
Thank you.
Final result, made manually
You can do it manually:
img1 = 255-mean(imread('a1.png'),3);
img2 = 255-mean(imread('a2.png'),3);
subplot(221);imagesc(img1);axis image
[x1 y1] = ginput(1);
subplot(222);imagesc(img2);axis image
[x2 y2] = ginput(1);
x = x1-x2;
y = y1-y2;
T = maketform('affine',[1 0 x;0 1 y; 0 0 1]');
img2N = imtransform(img2,T,'xdata',[1 size(img1,2)],'ydata',[1 size(img1,1)]);
subplot(2,2,[3 4]);
imagesc(max(img1,img2N));axis image
For doing it automatically, you can do this:
%size(img2) <= size(img1)
img1 = 255-mean(imread('a1.png'),3);
img2 = 255-mean(imread('a2.png'),3);
subplot(221);imagesc(img1);axis image
subplot(222);imagesc(img2);axis image
colormap(gray(256))
c = normxcorr2(img2,img1);
[y x] = find(c==max(c(:)));
y = y-size(img2,1);
x = x-size(img2,2);
T = maketform('affine',[1 0 x;0 1 y; 0 0 1]');
img2N = imtransform(img2,T,'xdata',[1 size(img1,2)],'ydata',[1 size(img1,1)]);
subplot(2,2,[3 4]);
imagesc(max(img1,img2N));axis image
I think what you want is image registration, which in your case requires at least 2 control points, because it's an affine transformation without reflection. Given the similarity of those 2 images, I think it's easy to find another reference point. After that you can use imtransform, or simply cp2tform, to perform the registration.
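A minimal sketch of that control-point approach, assuming you have picked two corresponding pixels in each image (the coordinates below are placeholders, not values from the question):
movingPoints = [xA1 yA1; xA2 yA2];   % two reference pixels in A (placeholders)
fixedPoints  = [xB1 yB1; xB2 yB2];   % the matching pixels in B (placeholders)
T = cp2tform(movingPoints, fixedPoints, 'nonreflective similarity');
A_aligned = imtransform(A, T, 'XData', [1 size(B,2)], 'YData', [1 size(B,1)]);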
You will need to fine-tune the 'XData' and 'YData' properties, but you could do this...
rgbA = imread('A.jpg');
rgbB = imread('B.jpg');
alpha(.2)
image(rgbA,'XData',2)
alpha(.2)
hold on
image(rgbB,'XData',2)
alpha(.2)