Error trying to rotate an image in Matlab using the INTERP2 function

I'm trying to perform an image rotation without using Matlab's built-in functions.
But I keep getting this error:
Error using .'
Transpose on ND array is not defined. Use PERMUTE instead.
Error in interp2 (line 130)
V = V.';
I don't understand why this error occurs, and I don't know how to adapt interp2, or use PERMUTE, to make the code work (I have consulted Matlab's help).
Could you please help me fix the code?
Thanks in advance!
clc; clear all; close all;
input_image = imread('mri.png');
Z = double(input_image);
Size = size(Z);
[X, Y] = meshgrid(1:Size(2), 1:Size(1));
% Center of the image: [x, y] = [columns, rows]/2
c = Size([2 1])/2;
% Angle of rotation
angle = 45;
t = angle*pi/180;
% Making the rotation
ct = cos(t);
st = sin(t);
Xi = c(1) + ct*(X - c(1)) - st*(Y - c(2));
Yi = c(2) + st*(X - c(1)) + ct*(Y - c(2));
% Interpolation
Zi = interp2(X, Y, Z, Xi, Yi);
figure()
subplot(121); imshow(input_image); title('Original image');
subplot(122); imshow(uint8(Zi)); title('Rotated image without built-in function');

Z is a 3D matrix and interp2 only works on 2D matrices, so you have to do the interpolation for each colour channel separately and recombine them:
%Interpolation
Zir = interp2(X, Y, Z(:,:,1), Xi, Yi);
Zig = interp2(X, Y, Z(:,:,2), Xi, Yi);
Zib = interp2(X, Y, Z(:,:,3), Xi, Yi);
Zi = cat(3, Zir, Zig, Zib);
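Equivalently, a loop over the third dimension avoids hard-coding three channels. A minimal sketch, reusing X, Y, Xi and Yi from the question:
% Interpolate each colour channel in turn (works for any number of channels)
Zi = zeros(size(Z));
for k = 1:size(Z, 3)
    Zi(:,:,k) = interp2(X, Y, Z(:,:,k), Xi, Yi);
end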


Interp2 of image with transformed coordinates

I have two greyscale images that I am trying to align using a scalar scaling s, a 2x2 rotation matrix R and a 2x1 translation vector t. I can calculate image1's transformed coordinates as
y = s*R*x + t;
Below the resulting images are shown.
The first image is image1 before transformation.
The second image is image1 (red) with attempted interpolation using interp2, shown on top of image2 (green).
The third image is when I manually insert the pixel values from image1 into an empty array (that has the same size as image2) using the transformed coordinates.
From this we can see that the coordinate transformation must have been successful, as the images are aligned, although not perfectly (which is to be expected, since only 2 coordinates were used in calculating s, R and t).
How come interp2 is not producing a result more similar to when I manually insert pixel values?
The code for doing this is included below:
Interpolation code
function [transformed_image] = interpolate_image(im_r, im_t, s, R, t)
    [m, n] = size(im_t);
    % Doesn't help if I use get_grid (the function the manual code uses) here
    [~, grid_xr, grid_yr] = get_ipgrid(im_r);
    [x_t, grid_xt, grid_yt] = get_ipgrid(im_t);
    y = s*R*x_t + t;
    yx = reshape(y(1,:), m, n);
    yy = reshape(y(2,:), m, n);
    transformed_image = interp2(grid_xr, grid_yr, im_r, yx, yy, 'nearest');
end
function [x, grid_x, grid_y] = get_ipgrid(image)
    [m, n] = size(image);
    [grid_x, grid_y] = meshgrid(1:n, 1:m);
    x = [reshape(grid_x, 1, []); reshape(grid_y, 1, [])]; % x is [2 x m*n] coordinate pairs
end
The manual code
function [transformed_image] = transform_image(im_r, im_t, s, R, t)
    [m, n] = size(im_t);
    [x_t, grid_xt, grid_yt] = get_grid(im_t);
    y = s*R*x_t + t;
    ymat = reshape(y', m, n, 2);
    yx = ymat(:,:,1);
    yy = ymat(:,:,2);
    transformed_image = zeros(m, n);
    for i = 1:m
        for j = 1:n
            % Make sure the coordinates are inside the image
            if (yx(i,j) < m && yy(i,j) < n && yx(i,j) > 0.5 && yy(i,j) > 0.5)
                transformed_image(round(yx(i,j)), round(yy(i,j))) = im_r(i,j);
            end
        end
    end
end
function [x, grid_x, grid_y] = get_grid(image)
    [m, n] = size(image);
    [grid_y, grid_x] = meshgrid(1:n, 1:m);
    x = [grid_x(:) grid_y(:)]'; % x is [2 x m*n] coordinate pairs
end
Can anyone see what I'm doing wrong with interp2? I feel like I have tried everything.
Turns out I got interpolation all wrong.
In my question I calculate the coordinates of im1 in im2.
However, the way interpolation works is that I need to calculate the coordinates of im2 in im1, so that I can map the image as shown below.
This means that I also calculated the wrong s, R and t, since they were used to transform im1 -> im2, whereas I needed im2 -> im1 (this is also called the inverse transform). Below is the manual code, which is basically the same as interp2 with nearest-neighbour interpolation:
function [transformed_image] = transform_image(im_r, im_t, s, R, t)
    [m, n] = size(im_t);
    [x_t, grid_xt, grid_yt] = get_grid(im_t);
    y = s*R*x_t + t;
    ymat = reshape(y', m, n, 2);
    yx = ymat(:,:,1);
    yy = ymat(:,:,2);
    transformed_image = zeros(m, n);
    for i = 1:m
        for j = 1:n
            % Make sure the coordinates are inside the image
            if (yx(i,j) < m && yy(i,j) < n && yx(i,j) > 0.5 && yy(i,j) > 0.5)
                transformed_image(i,j) = im_r(round(yx(i,j)), round(yy(i,j)));
            end
        end
    end
end
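For comparison, the same inverse mapping can be written with interp2 directly. This is a minimal sketch (interpolate_inverse is a hypothetical name), assuming s, R and t encode the inverse (im2 -> im1) transform and that coordinates are ordered (column; row) as interp2 expects:
function transformed_image = interpolate_inverse(im_r, im_t, s, R, t)
    im_r = double(im_r);                % interp2 requires a floating-point image
    % Sample grid over the reference image im_r
    [mr, nr] = size(im_r);
    [grid_xr, grid_yr] = meshgrid(1:nr, 1:mr);
    % Query grid over the target image im_t
    [m, n] = size(im_t);
    [grid_xt, grid_yt] = meshgrid(1:n, 1:m);
    x_t = [grid_xt(:)'; grid_yt(:)'];   % 2 x (m*n) (col; row) pairs
    % Inverse-map every target pixel back into im_r
    y = s*R*x_t + t;
    yx = reshape(y(1,:), m, n);         % query columns in im_r
    yy = reshape(y(2,:), m, n);         % query rows in im_r
    % Nearest-neighbour lookup; out-of-range queries become 0
    transformed_image = interp2(grid_xr, grid_yr, im_r, yx, yy, 'nearest', 0);
end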

How to blur an image in one specific direction in Matlab?

I have an image and I would like to blur it in one specific direction and distance using Matlab.
I found out there is a filter called fspecial('motion',len,theta).
Here there is an example:
I = imread('cameraman.tif');
imshow(I);
H = fspecial('motion',20,45);
MotionBlur = imfilter(I,H,'replicate');
imshow(MotionBlur);
However, the blurred picture is blurred in 2 directions! In this case, 225 and 45 degrees.
What should I do in order to blur it in just one specific direction (e.g. 45 degrees) and not both?
I think you want what's called a "comet" kernel. I'm not sure what kernel is used for the "motion" blur, but I'd guess it's symmetric, based on the image you provided.
Here is some code to play with that applies the comet kernel in one direction. You'll have to change things around if you want an arbitrary angle. You can see from the output that it smears in one direction only, since there is a black band on just one side (due to the lack of pixels there).
L = 5;       % kernel half-width
sigma = 0.2; % kernel smoothness
I = imread('cameraman.tif');
x = -L:1.0:L;
[X, Y] = meshgrid(x, x);
% Isotropic Gaussian kernel
H1 = exp((-sigma.*X.^2) + (-sigma.*Y.^2));
% Keep only one half-plane so the smear goes in a single direction
Hflag = double(X > 0);
comet_kernel = Hflag.*H1;
comet_kernel = comet_kernel/sum(comet_kernel(:));
smearedImage = conv2(double(I), comet_kernel, 'same');
imshow(smearedImage, []);
Updated code: This will apply an arbitrary rotation to the comet kernel. Note also the difference between sigma in the previous example and sx and sy here, which control the length and width parameters of the kernel, as suggested by Andras in the comments.
L = 5;     % kernel half-width
sx = 3;    % kernel width (perpendicular to the smear)
sy = 10;   % kernel length (along the smear)
theta = 0;
I = imread('cameraman.tif');
x = -L:1.0:L;
[X, Y] = meshgrid(x, x);
% Rotate the coordinate grid by theta
rX = X.*cos(theta) - Y.*sin(theta);
rY = X.*sin(theta) + Y.*cos(theta);
% Anisotropic Gaussian in the rotated frame
H1 = exp(-((rX./sx).^2) - ((rY./sy).^2));
% Keep only one half-plane of the rotated grid
Hflag = double((0.*rX + rY) > 0);
H1 = H1.*Hflag;
comet_kernel = H1/sum(H1(:));
smearedImage = conv2(double(I), comet_kernel, 'same');
imshow(smearedImage, []);
Based on Anger Density's answer, I wrote this code, which solves my problem completely:
L = 10;   % kernel half-width
sx = 0.1;
sy = 100;
THETA = ([0, 45, 90, 135, 180, 225, 270, 320, 360])*pi/180;
for i = 1:length(THETA)
    theta = (THETA(i) + pi)*-1;
    I = imread('cameraman.tif');
    x = -L:1.0:L;
    [X, Y] = meshgrid(x, x);
    rX = X.*cos(theta) - Y.*sin(theta);
    rY = X.*sin(theta) + Y.*cos(theta);
    H1 = exp(-((rX./sx).^2) - ((rY./sy).^2));
    Hflag = double((0.*rX + rY) > 0);
    H1 = H1.*Hflag;
    comet_kernel = H1/sum(H1(:));
    smearedImage = conv2(double(I), comet_kernel, 'same');
    % Fix edges
    smearedImage(:, [1:L, end-L:end]) = I(:, [1:L, end-L:end]); % Left/right edges
    smearedImage([1:L, end-L:end], :) = I([1:L, end-L:end], :); % Top/bottom edges
    % Keep only the inner blur
    smearedImage(L:end-L, L:end-L) = min(smearedImage(L:end-L, L:end-L), double(I(L:end-L, L:end-L)));
    figure
    imshow(smearedImage, []);
    title(num2str(THETA(i)*180/pi))
    set(gcf, 'Units', 'Normalized', 'OuterPosition', [0 0 1 1]);
end
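As a side note, instead of rebuilding the kernel for every angle, one could build the comet kernel once at theta = 0 and rotate it with imrotate. A hedged sketch (imrotate requires the Image Processing Toolbox, and the rotated kernel must be renormalized):
% Rotate a precomputed comet_kernel to 45 degrees instead of recomputing it
rot_kernel = imrotate(comet_kernel, 45, 'bilinear', 'crop');
rot_kernel = rot_kernel/sum(rot_kernel(:));   % renormalize after interpolation losses
blurred = conv2(double(I), rot_kernel, 'same');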

How to write an if else statement in matrix matlab

I am trying to read an image and get the x, y and pixel coordinates. It is an RGB image of size (282, 282, 3). However, I get a pixel coordinate matrix of (282*3, 282, pixel values). Further, although the if-else condition works for ordinary scalar values, it does not work in this code. Can anyone help me find where I went wrong?
clear all
clc
A = double(imread('F:\02.jpg'));
size(A)
[length, width] = size(A);
[x, y] = meshgrid(1:width, 1:length);
z(:) = A(:)/255;
if (z >= 0.50000)
    z = 1;
elseif (z < 0.50000)
    z = 0;
end
Z = z(:)
Since z is not a scalar but a matrix or vector, a logical comparison like z >= val also results in a matrix/vector (of mixed ones and zeros). What you can do is use this result as a logical index, for example:
ix = z >= 0.5;
z( ix) = 1;
z(~ix) = 0;
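Equivalently, since the comparison already produces the 0/1 pattern you want, the whole thresholding is a one-liner (a minimal sketch, assuming z is a double array as above):
% The logical mask itself is the thresholded result
z = double(z >= 0.5);
As an aside, the unexpected 282*3 dimension you observed comes from calling size with fewer outputs than the array has dimensions: the trailing dimensions are folded into the last output. A minimal demonstration:
A = zeros(282, 282, 3);
[h, w] = size(A)      % h = 282, w = 282*3 = 846
[h, w, d] = size(A)   % h = 282, w = 282, d = 3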

convert an image from Cartesian to Polar

I'm trying to convert an image containing many concentric circles from Cartesian to Polar coordinates (so that in the new image the circles become straight lines instead, see the image below), and that works out just fine using the following code:
[r, c] = size(img);
r=floor(r/2);
c=floor(c/2);
[X, Y] = meshgrid(-c:c-1,-r:r-1);
[theta, rho] = cart2pol(X, Y);
subplot(221), imshow(img), axis on;
hold on;
subplot(221), plot(xCenter,yCenter, 'r+');
subplot(222), warp(theta, rho, zeros(size(theta)), img);
view(2), axis square;
The problem is, I don't understand why it even works (obviously it's not my code). I mean, when I use the function cart2pol I don't even use the image; it's just some vectors x and y generated by the meshgrid function.
And another problem is, I want somehow to get a new image (not just to draw it with the warp function) which is the original image indexed by the theta and rho coordinates (meaning the same pixels, but rearranged). I'm not even sure how to ask this; in the end I want an image which is a matrix, so that I can sum each row and turn the matrix into a column vector.
You can think of your image as a 2D matrix, where each pixel has an (X, Y) coordinate:
[(1,1) (1,2) (1,3) .... (1,c)]
[(2,1) (2,2) (2,3) .... (2,c)]
[(3,1) (3,2) (3,3) .... (3,c)]
[.... .... .... .... .... ]
[(r,1) (r,2) (r,3) .... (r,c)]
In the code that you posted, each of these (X,Y) coordinates is mapped to its equivalent polar coordinate (theta, rho), using the center of the image, floor(c/2) and floor(r/2), as the reference point.
% Map the pixel at row 1, column 1 to its polar equivalent
% (note cart2pol takes (x, y) and returns [theta, rho])
[theta, rho] = cart2pol(1 - floor(c/2), 1 - floor(r/2));
So whatever pixel value was at (1,1) now appears in your new polar coordinate space at (theta, rho). It is important to note that this conversion does not depend on the actual pixel values in the image; we just perform the same transformation for every pixel location.
So first we figure out where the center of the image is:
[r, c] = size(img);
r = floor(r / 2);
c = floor(c / 2);
Then we figure out the (X,Y) coordinates of every point in the image (after the center has been subtracted out):
[X, Y] = meshgrid(-c:c-1,-r:r-1);
Now convert all of these cartesian points to polar coordinates
[theta, rho] = cart2pol(X, Y);
All that warp now does is say "display the value of img at (X,Y) at its corresponding location in (theta, rho)":
warp(theta, rho, zeros(size(theta)), img);
Now it seems that you want a new 2D image where the dimensions are [nTheta, nRho]. To do this, you could use griddata to interpolate your scattered (theta, rho) image (which is displayed by warp above) to a regular grid.
% This is the spacing of your radius axis (columns)
rhoRange = linspace(0, max(rho(:)), 100);
% This is the spacing of your theta axis (rows)
thetaRange = linspace(-pi, pi, 100);
% Generate a grid of all (theta, rho) coordinates in your destination image
[T,R] = meshgrid(thetaRange, rhoRange);
% Now map the values in img to your new image domain
theta_rho_image = griddata(theta, rho, double(img), T, R);
Take a look at all the interpolation methods for griddata to figure out which is most appropriate for your scenario.
There were a couple of other issues (like the rounding of the center) which caused the result to be slightly incorrect. A fully working example is provided below.
% Create an image of concentric circles
radii = linspace(0, 40, 10);
rows = 100;
cols = 100;
img = zeros(rows, cols);
for k = 1:numel(radii)
    t = linspace(0, 2*pi, 1000);
    xx = round((cos(t) * radii(k)) + (cols / 2));
    yy = round((sin(t) * radii(k)) + (rows / 2));
    toremove = xx > cols | xx < 1 | yy > rows | yy < 1;
    % sub2ind expects row subscripts (yy) first, then column subscripts (xx)
    inds = sub2ind(size(img), yy(~toremove), xx(~toremove));
    img(inds) = 1;
end
[r,c] = size(img);
center_row = r / 2;
center_col = c / 2;
[X,Y] = meshgrid((1:c) - center_col, (1:r) - center_row);
[theta, rho] = cart2pol(X, Y);
rhoRange = linspace(0, max(rho(:)), 1000);
thetaRange = linspace(-pi, pi, 1000);
[T, R] = meshgrid(thetaRange, rhoRange);
theta_rho_image = griddata(theta, rho, double(img), T, R);
figure
subplot(1,2,1);
imshow(img);
title('Original Image')
subplot(1,2,2);
imshow(theta_rho_image);
title('Polar Image')
And the result
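As a usage note tying back to the stated goal of summing each row: once theta_rho_image exists, that step is a one-liner. griddata leaves NaN outside the convex hull of the scattered data, so those entries should be ignored:
% Rows index rho and columns index theta, so this sums over theta for each radius
row_sums = sum(theta_rho_image, 2, 'omitnan');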

Why is my cube distorted?

Using a quaternion, if I rotate my cube along an axis by 90 degrees, I get a different front facing cube side, which appears as a straight-on square of a solid color. My cube has different colored sides, so changing the axis it is rotated along gives me these different colors as expected.
When I try to rotate by an arbitrary amount, I get quite the spectacular mess, and I don't know why since I'd expect the quaternion process to work well regardless of the angle:
I am creating a quaternion from 2 vectors using this:
template <typename T>
inline QuaternionT<T> QuaternionT<T>::CreateFromVectors(const Vector3<T>& v0, const Vector3<T>& v1)
{
    // Opposite vectors: fall back to a half-turn about an arbitrary axis
    if (v0 == -v1)
        return QuaternionT<T>::CreateFromAxisAngle(vec3(1, 0, 0), Pi);
    Vector3<T> c = v0.Cross(v1);
    T d = v0.Dot(v1);
    T s = std::sqrt((1 + d) * 2);
    QuaternionT<T> q;
    q.x = c.x / s;
    q.y = c.y / s;
    q.z = c.z / s;
    q.w = s / 2.0f;
    return q;
}
I think the above method is fine since I've seen plenty of sample code correctly using it.
With the above method, I do this:
Quaternion quat1=Quaternion::CreateFromVectors(vec3(0,1,0), vec3(0,0,1));
It works, and it is a 90-degree rotation.
But suppose I want more like a 45-degree rotation?
Quaternion quat1=Quaternion::CreateFromVectors(vec3(0,1,0), vec3(0,1,1));
This gives me the mess above. I also tried normalizing quat1, which produces different, though similarly distorted, results.
I am using the quaternion as a Modelview rotation matrix, using this:
template <typename T>
inline Matrix3<T> QuaternionT<T>::ToMatrix() const
{
    // Standard quaternion-to-rotation-matrix conversion;
    // note this formula assumes a unit-length quaternion
    const T s = 2;
    T xs, ys, zs;
    T wx, wy, wz;
    T xx, xy, xz;
    T yy, yz, zz;
    xs = x * s;  ys = y * s;  zs = z * s;
    wx = w * xs; wy = w * ys; wz = w * zs;
    xx = x * xs; xy = x * ys; xz = x * zs;
    yy = y * ys; yz = y * zs; zz = z * zs;
    Matrix3<T> m;
    m.x.x = 1 - (yy + zz); m.y.x = xy - wz;       m.z.x = xz + wy;
    m.x.y = xy + wz;       m.y.y = 1 - (xx + zz); m.z.y = yz - wx;
    m.x.z = xz - wy;       m.y.z = yz + wx;       m.z.z = 1 - (xx + yy);
    return m;
}
Any idea what's going on here?
What does your frustum look like? If you have a distorted "lens", such as an exceptionally wide-angle field of view, then rotations that actually reveal depth, such as an arbitrary rotation, might not look as you expect (just as a fisheye lens on a camera makes perspective look unrealistic).
Make sure you are using a realistic frustum if you want to see realistic images.
