Matlab: Extract image in polar representation from Cartesian image

I'm trying to find an efficient way to transform an image from Cartesian coordinates into a polar representation. I know functions such as ImToPolar do this, and they work perfectly, but they take considerable time for big images, especially when the images need to be processed back and forth.
Here's my input image:
I then generate a polar mesh using a Cartesian mesh centered at 0 and the function cart2pol(). Finally, I plot my image using mesh(theta, r, Input).
And here's what I obtain:
It's exactly the image I need, and it's the same as ImToPolar, or maybe better.
Since MATLAB knows how to compute it, does anybody know how to extract a matrix in polar representation from this output? Or is there a fast (as in fast Fourier transform) way to compute a polar transform (and its inverse) in MATLAB?

pol2cart, meshgrid, and interp2 are sufficient to create the result:
I = imread('http://i.stack.imgur.com/HYSyb.png');
[r, c, ~] = size(I);
% the RGB image can be converted to an indexed image to avoid repeating the
% computation for each color channel
[idx, mp] = rgb2ind(I, 32);
% center the image coordinates
x = (1:c) - (c/2);
y = (1:r) - (r/2);
% create destination coordinates in polar form so the image values can be
% interpolated at those coordinates;
% the angle ranges from 0 to 2*pi and the radius is assumed to range from 0 to 400
% linspace(0, 2*pi, 200) leads to a stretched image - try it!
[xp, yp] = meshgrid(linspace(0, 2*pi), linspace(0, 400));
% translate the coordinates from polar to image coordinates
[xx, yy] = pol2cart(xp, yp);
% interpolate pixel values at the new coordinates
% (interp2 expects floating-point sample values, hence the double cast)
out = interp2(x, y, double(idx), xx, yy);
% round back to integer indices and save the result to a file
imwrite(uint8(round(out)), mp, 'result.png')
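If you also need the inverse mapping (polar image back to Cartesian), the same interpolation idea works in reverse. Here is a minimal sketch built on the variables above (out, mp, x, y) and on the same assumptions (angle axis 0 to 2*pi, radius capped at 400, default linspace length of 100):
% Cartesian pixel grid with the same centering as above
[xc, yc] = meshgrid(x, y);
% polar coordinates of every Cartesian pixel
[th, rad] = cart2pol(xc, yc);
th = mod(th, 2*pi);   % match the 0..2*pi angle axis used above
% sample the polar image on its (angle, radius) axes
back = interp2(linspace(0, 2*pi), linspace(0, 400), out, th, rad);
imwrite(uint8(round(back)), mp, 'result_back.png')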

Related

Image Repetition from Binary to Cartesian

I'd like to take in an RGB image, find the points in the image that are white, and get the Cartesian coordinates of those points. I've gotten most of the way there, but when I try to plot the Cartesian coordinates, I get a vertically tiled image (i.e. 5 overlapped copies of what I should see). Does anyone know what could be causing this?
Code (the JPG comes in as 2448 x 3264 x 3 uint8):
I = imread('IMG_0245.JPG');
imshow(I); % display unaltered image
% Convert image to grayscale
I = rgb2gray(I);
% Convert image to binary (black/white)
I = im2bw(I, 0.9);
% Generate cartesian coordinates of image
imageSize = size(I);
[x, y] = meshgrid( 1:imageSize(1), 1:imageSize(2) );
PerspectiveImage = [x(:), y(:), I(:)];
% Get indices of white points only
whiteIndices = find(PerspectiveImage(:,3));
figure; plot( PerspectiveImage(whiteIndices, 1), PerspectiveImage(whiteIndices, 2),'.');
% Flip vertically to correct indexing vs. plotting issue
axis ij
Very simple. You're declaring your meshgrid wrong. It should be:
[x, y] = meshgrid( 1:imageSize(2), 1:imageSize(1) );
The first parameter denotes the horizontal extents of the 2D grid, and so you want to make this vary for as many columns as you have. Similarly, the second parameter denotes the vertical extents of the 2D grid, and so you want to make this for as many rows as you have.
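As a quick illustration of the argument order (a minimal sketch with a hypothetical 2-row, 3-column image size):
imageSize = [2 3];   % 2 rows, 3 columns
[x, y] = meshgrid(1:imageSize(2), 1:imageSize(1));
size(x)   % returns [2 3], matching the image dimensions
% x varies along the columns, y varies along the rows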
I had to pre-process your image to get good results because the original image had a large white border surrounding it. I removed this border by zeroing out all pure white pixels. I also read the image in directly from the link:
I = imread('http://s7.postimg.org/ovb53w4ff/Track_example.jpg');
mask = all(I == 255, 3);              % locate pure-white pixels
I = bsxfun(@times, I, uint8(~mask));  % zero them out in every channel
This was the image I get after doing my pre-processing:
Once I do this and change your meshgrid call, I get this:

How to rescale the intensity range of a grayscale 3 dimension image (x,y,z) using Matlab

I can't find information online about the intensity rescaling of a 3D image made of several 2D images.
I'm looking for the same function as imadjust which only works for 2D images.
My 3D image is the combination of 2D images stacked together but I have to process the 3D image and not the 2D images one by one.
I can't loop imadjust because I want to process the images as one, to consider all the information available, in all directions.
To apply imadjust to a set of 2D grayscale images while taking the whole range of values into account, this trick might work:
a = imread('pout.tif');
a = imresize(a,[256 256]); %// re-sizing to match image b's dimension
b = imread('cameraman.tif');
Im = cat(3,a,b);
%//where a,b are separate grayscale images of same dimensions
%// if you have the images separately you could edit this line to
%// Im = cat(2,a,b);
%// and also avoid the next step
%// reshaping into a 2D matrix to apply imadjust
Im = reshape(Im,size(Im,1),[]);
out = imadjust(Im); %// applying imadjust
%// finally reshaping back to its original shape
out = reshape(out,size(a,1),size(a,2),[]);
To check:
x = out(:,:,1);
y = out(:,:,2);
As you could see from the Workspace image, the first image (variable x) is not re-scaled to 0-255 as its previous range (variable a) was not near the 0 point.
WorkSpace:
Edit: You could do this as a one-step process like this: (as the other answer suggests)
%// reshaping to single column using colon operator and then using imadjust
%// then reshaping it back
out = reshape(imadjust(Image3D(:)),size(Image3D));
Edit2:
As you have the images as a cell array in I2, try this:
I2D = cat(2,I2{:})
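If you prefer to keep the data as a 3D stack instead of a 2D montage, the same one-liner from above applies; this is a sketch assuming the cells of I2 are same-sized 2D grayscale images:
Im3 = cat(3, I2{:});   % stack the 2D slices into a volume
out3 = reshape(imadjust(Im3(:)), size(Im3));   % joint adjustment, shape preserved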
The only way to do this for a 3D image is to treat the data as a vector and then reshape it back.
Something like this:
% create a random 3D image
x = rand(10,20,30);
% adjust the intensity range over the whole volume, then restore the original shape
x_adj = reshape(imadjust(x(:)), size(x));
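A quick sanity check (a sketch using the arrays above): the adjusted volume keeps its shape and spans the full intensity range:
size(x_adj)   % same 10x20x30 shape as the input
[min(x_adj(:)) max(x_adj(:))]   % approximately [0 1] after adjustment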

Vector decomposition in matlab

This is my situation: I have a 30x30 image and I want to calculate the radial and tangential components of the gradient at each point (pixel), along the straight line passing through the centre of the image (15,15) and the point (i,j) itself.
[dx, dy] = gradient(img);
for i=1:30
    for j=1:30
        pt = [dx(i,j), dy(i,j)];
        line = [i-15, j-15];
        costh = dot(line, pt)/(norm(line)*norm(pt));
        par(i,j) = norm(costh*line);
        tang(i,j) = norm(sin(acos(costh))*line);
    end
end
Is this code correct?
I think there is a conceptual error in your code. I tried to get your results with a different approach; see how it compares to yours.
[dy, dx] = gradient(img);
I inverted x and y because the usual convention in MATLAB is to have the first dimension along the rows of a matrix, while gradient does the opposite.
I created an array the same size as img, with each pixel containing the angle of the vector from the centre of the image to that point:
[I,J] = ind2sub(size(img), 1:numel(img));
theta=reshape(atan2d(I-ceil(size(img,1)/2), J-ceil(size(img,2)/2)), size(img))+180;
The function atan2d ensures that the 4 quadrants give distinct angle values.
Now the projection of the x and y components can be obtained with trigonometry:
par = dx.*sind(theta) + dy.*cosd(theta);    % radial component
tang = dx.*cosd(theta) - dy.*sind(theta);   % tangential component
Note the use of .* to achieve element-wise multiplication; this is a big advantage of MATLAB's matrix computations and saves you a loop.
Here's an example with a well-defined input image (no gradient along the rows and a constant gradient along the columns):
img=repmat(1:30, [30 1]);
The results:
subplot(1,2,1)
imagesc(par)
subplot(1,2,2)
imagesc(tang)
colorbar
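As a quick sanity check (a sketch using the arrays above), the two components are orthogonal projections of the gradient, so their squares should sum to the squared gradient magnitude:
err = max(max(abs(par.^2 + tang.^2 - (dx.^2 + dy.^2))));
disp(err)   % should be near machine precision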

Assign specific RGB colours to 3d mesh/surface/points

Face and feature landmarks
I have a face image that has labelled face features. The image is stored in standard JPEG format and the landmarks are stored in [x y] format (x,y of point corresponds to its coordinates on the image as shown below)
Interpolated 3d face mesh
I have generated depth information (a 3d mesh) for each of the labelled points, and have a matrix in [x y z] format, where the coordinates x and y are the same as that of the points.
The sparse mesh looks like this:
I then interpolated over xrange, yrange and zrange to get a better mesh. Using mesh(xrange,yrange,zrange) gives me the following
The colours of the face image pixels can be obtained using imread('face_image.jpg').
Given that the (x,y) value of each interpolated point corresponds to (x,y) in the image, is it possible to make the colour of the pixel at (x,y,z) in the 3D mesh the same as the colour of (x,y) in the face image?
This would effectively superimpose/warp the face onto the 3D mesh, giving me a 3D face model.
I would suggest this:
n=50000; % choose something appropriate
[C,map] = rgb2ind(FaceImageRGB,n);
This maps the colors in your RGB image to a linear index. Make sure the mesh and the RGB image have the same x-y dimensions.
Then use surf to plot the surface with the indexed values as the color data (in the form surf(X,Y,Z,C)) and the map as the colormap:
surf(X, Y, Z, double(C)), shading flat;
colormap(map);
Edit: a working example (with a colorful image this time...):
rgbim=imread('http://upload.wikimedia.org/wikipedia/commons/0/0d/Loriculus_vernalis_-Ganeshgudi,_Karnataka,_India_-male-8-1c.jpg');
n=50000; % choose something appropriate
[C,map] = rgb2ind(rgbim,n);
% Creation of mesh with the same dimensions as the image:
[X,Y] = meshgrid(-floor(size(rgbim, 2)/2):floor(size(rgbim, 2)/2), -floor(size(rgbim, 1)/2):floor(size(rgbim, 1)/2));
% An arbitrary function for Z:
Z=-(X.^2+Y.^2);
% Display the surface with the image as color value:
surf(X, Y, Z, double(C)), shading flat
colormap(map);
Result:
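To apply this directly to the face mesh from the question, a minimal sketch could look like the following; it assumes the interpolated grids xrange, yrange, zrange and the file face_image.jpg from the question, and that the mesh and the image cover the same x-y extent:
FaceImageRGB = imread('face_image.jpg');
[C, map] = rgb2ind(FaceImageRGB, 50000);
% bring the index image to the mesh resolution without mixing indices
C = imresize(C, size(zrange), 'nearest');
surf(xrange, yrange, zrange, double(C)), shading flat
colormap(map);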

is coordinate mapping same as pixel mapping in matlab for delaunay triangulation

I have to transform pixels from one image onto another image, by feature detection. I have calculated the projective transformation matrix. One image is the base image, and the other is a linearly translated image.
Now I have to define a larger grid and assign pixels from the base image to it. For example, if the base image has the value 20 at (1,1), the larger grid will also have 20 at (1,1), and zeroes are assigned to all the unfilled positions of the grid. Then I have to map the linearly translated image onto the base image and write my own algorithm based on Delaunay triangulation to interpolate between the images.
My question is: when I map the translated image to the base image, I use the relation
(w,z)=inv(T).*(x,y)
A=inv(T).*B
where (w,z) are coordinates of the base image, (x,y) are coordinates of the translated image, A is a matrix containing coordinates (w z 1) and B is matrix containing coordinates (x y 1).
If I use the following code I get the new coordinates, but how do I relate these things to the image? Are my pixels from the second image also translated onto the first image? If not, how can I do this?
close all; clc; clear all;
image1_gray=imread('C:\Users\Javeria Farooq\Desktop\project images\a.pgm');
figure; imshow(image1_gray); axis on; grid on;
title('Base image');
impixelinfo
hold on
image2_gray =imread('C:\Users\Javeria Farooq\Desktop\project images\j.pgm');
figure(2); imshow(image2_gray); axis on; grid on;
title('Unregistered image1');
impixelinfo
% Detect and extract features from both images
points_image1= detectSURFFeatures(image1_gray, 'NumScaleLevels', 100, 'NumOctaves', 5, 'MetricThreshold', 500 );
points_image2 = detectSURFFeatures(image2_gray, 'NumScaleLevels', 100, 'NumOctaves', 12, 'MetricThreshold', 500 );
[features_image1, validPoints_image1] = extractFeatures(image1_gray, points_image1);
[features_image2, validPoints_image2] = extractFeatures(image2_gray, points_image2);
% Match feature vectors
indexPairs = matchFeatures(features_image1, features_image2, 'Prenormalized', true) ;
% Get matching points
matched_pts1 = validPoints_image1(indexPairs(:, 1));
matched_pts2 = validPoints_image2(indexPairs(:, 2));
figure; showMatchedFeatures(image1_gray,image2_gray,matched_pts1,matched_pts2,'montage');
legend('matched points 1','matched points 2');
figure(5); showMatchedFeatures(image1_gray,image3_gray,matched_pts4,matched_pts3,'montage');
legend('matched points 1','matched points 3');
% Compute the transformation matrix using RANSAC
[tform, inlierFramePoints, inlierPanoPoints, status] = estimateGeometricTransform(matched_pts1, matched_pts2, 'projective')
figure(6); showMatchedFeatures(image1_gray,image2_gray,inlierPanoPoints,inlierFramePoints,'montage');
[m n] = size(image1_gray);
image1_gray = double(image1_gray);
[x1g,x2g]=meshgrid(m,n) % A MESH GRID OF 2X2
k=imread('C:\Users\Javeria Farooq\Desktop\project images\a.pgm');
ind = sub2ind( size(k),x1g,x2g);
%[tform1, inlierFramepPoints, inlierPanopPoints, status] = estimateGeometricTransform(matched_pts4, matched_pts3, 'projective')
%figure(7); showMatchedFeatures(image1_gray,image3_gray,inlierPanopPoints,inlierFramepPoints,'montage');
%invtform=invert(tform)
%x=invtform
%[xq,yq]=meshgrid(1:0.5:200.5,1:0.5:200.5);
r=[];
A=[];
k=1;
% I did not know how to refer to the variable tform, so I wrote out the
% transformation matrix from the tform structure
T=[0.99814272,-0.0024304502,-1.2932052e-05;2.8876773e-05,0.99930143,1.6285858e-06;0.029063907,67.809265,1]
% let's take i=1:400 so my r=2 and the resulting grid is 400x400
for i=1:200
for j=1:200
A=[A; i j 1];
z=A*T;
r=[r;z(k,1)/z(k,3),z(k,2)/z(k,3)];
k=k+1;
end
end
% I have transformed the coordinates, but how do I assign values?
% r(i,j)=c(i,j)
d1=[];
d2=[];
for l=1:40000
d1=[d1;A(l,1)];
d2=[d2;r(l,1)];
X=[d1 d2];
X=X(:);
end
c1=[];
c2=[];
for l=1:40000
c1=[c1;A(l,2)];
c2=[c2;r(l,2)];
Y=[c1 c2];
Y=Y(:);
end
% this Delaunay triangulation is of vertices; as far as I understand, it
% does not carry any pixel values from either image
DT=delaunayTriangulation(X,Y);
triplot(DT,X,Y);
I solved this problem using these two steps:
Use the transformPointsForward command to transform the coordinates of the image, using the tform object returned by estimateGeometricTransform.
Use the scatteredInterpolant class in MATLAB to assign the transformed coordinates their respective pixel values:
F = scatteredInterpolant(P, z)
Here P is an n-by-2 matrix containing all the transformed coordinates, and z is an n-by-1 vector containing the pixel values of the image being transformed; it is obtained by converting the image to a column vector using image = image(:).
Finally, all the transformed coordinates are present along with their pixel values on the base image and can be interpolated.
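A minimal sketch of those two steps, assuming image1_gray, image2_gray and the tform returned by estimateGeometricTransform from the code above (depending on which way tform was estimated you may need transformPointsInverse instead):
[c2, r2] = meshgrid(1:size(image2_gray,2), 1:size(image2_gray,1));
[u, v] = transformPointsForward(tform, c2(:), r2(:));      % transformed coordinates
F = scatteredInterpolant([u v], double(image2_gray(:)));   % attach pixel values
% sample the warped image on the base image's pixel grid
[cb, rb] = meshgrid(1:size(image1_gray,2), 1:size(image1_gray,1));
warped = reshape(F(cb(:), rb(:)), size(image1_gray));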
You are doing way too much work here, and I don't think you need the Delaunay Triangulation at all. Use the imwarp function from the Image Processing Toolbox to transform the image. It takes the original image and the tform object returned by estimateGeometricTransform.
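A minimal imwarp sketch along those lines, assuming image1_gray, image2_gray and tform from the code above ('OutputView' keeps the result aligned with the base image):
outView = imref2d(size(image1_gray));
registered = imwarp(image2_gray, tform, 'OutputView', outView);
figure; imshowpair(image1_gray, registered, 'blend');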
