Face and feature landmarks
I have a face image with labelled facial features. The image is stored in standard JPEG format and the landmarks are stored in [x y] format (the x,y of a point corresponds to its coordinates on the image, as shown below)
Interpolated 3d face mesh
I have generated depth information (a 3d mesh) for each of the labelled points, and have a matrix in [x y z] format, where the x and y coordinates are the same as those of the labelled points.
The sparse mesh looks like this:
I then interpolated over xrange, yrange and zrange to get a better mesh. Using mesh(xrange,yrange,zrange) gives me the following
The colours for the face image pixels can be obtained using imread('face_image.jpg').
Given that the (x,y) value of each of the interpolated points corresponds to (x,y) in the image, is it possible to make the colour of the pixel at (x,y,z) [3d mesh] the same as the colour of (x,y) [face image]?
This would effectively superimpose/warp the face onto the 3d mesh, giving me a 3d face model.
I would suggest this:
n=50000; % choose something appropriate
[C,map] = rgb2ind(FaceImageRGB,n);
This maps the colors in your RGB image to a linear index. Make sure the mesh and the RGB image have the same x-y dimensions.
Then use surf to plot the surface with the indexed values for color (should be in the form surf(X,Y,Z,C)) and the map as color map.
surf(X, Y, Z, double(C)), shading flat;
colormap(map);
Edit: a working example (with a colorful image this time...):
rgbim=imread('http://upload.wikimedia.org/wikipedia/commons/0/0d/Loriculus_vernalis_-Ganeshgudi,_Karnataka,_India_-male-8-1c.jpg');
n=50000; % choose something appropriate
[C,map] = rgb2ind(rgbim,n);
% Creation of mesh with the same dimensions as the image:
[X,Y] = meshgrid((1:size(rgbim, 2)) - size(rgbim, 2)/2, (1:size(rgbim, 1)) - size(rgbim, 1)/2); % sized to match the image so C lines up with Z
% An arbitrary function for Z:
Z=-(X.^2+Y.^2);
% Display the surface with the image as color value:
surf(X, Y, Z, double(C)), shading flat
colormap(map);
Result:
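Tying this back to the face data from the question, the same recipe would look roughly like the sketch below; it assumes the interpolated xrange, yrange and zrange matrices have the same x-y dimensions as the face image.
% Sketch: texture the interpolated face mesh with the face image, assuming
% xrange, yrange, zrange are matrices matching the image's x-y dimensions.
faceRGB = imread('face_image.jpg');
[C, map] = rgb2ind(faceRGB, 50000);   % choose an appropriate number of colors
surf(xrange, yrange, zrange, double(C)), shading flat
colormap(map);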
I have a set of coordinates of a 6-image Cubemap (Front, Back, Left, Right, Top, Bottom) as follows:
[ [160, 314], Front; [253, 231], Front; [345, 273], Left; [347, 92], Bottom; ... ]
Each image is 500x500px, with [0, 0] being the top-left corner.
I want to convert these coordinates to their equivalents in equirectangular, for a 2500x1250p image. The layout is like this:
I don't need to convert the whole image, just the set of coordinates. Is there any straightforward conversion for a specific pixel?
Convert your image + 2D coordinates to a 3D normalized vector
The point (0,0,0) must be the center of your cube map for this to work as intended. So basically you need to add the U,V direction vectors, scaled by your coordinates, to the 3D position of texture point (0,0). The direction vectors are just unit vectors where each axis has 3 options {-1, 0, +1} and only one axis coordinate is non-zero for each vector. Each side of the cube map has one combination ... which one depends on your conventions, which we do not know as you did not share any specifics.
Use a Cartesian to spherical coordinate system transformation
You do not need the radius, just the two angles ...
Convert the spherical angles to your 2D texture coordinates
This step depends on your 2D texture geometry. The simplest is a rectangular texture (I think that is what you mean by equirectangular), but there are other mappings out there with specific features, and each requires a different conversion. Here are a few examples:
Bump-map a sphere with a texture map
How to do a shader to convert to azimuthal_equidistant
For the rectangular texture you just scale the spherical angles to the texture resolution ...
U = lon * Usize/(2*Pi)
V = (lat+(Pi/2)) * Vsize/Pi
plus/minus some orientation signs to match your coordinate systems.
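Putting the three steps together, here is a minimal MATLAB sketch. The function name cubeToEquirect is hypothetical, the face-to-axis assignments are assumptions, and the signs/offsets will need adjusting to your conventions; pixel coordinates are taken as 0-based with [0, 0] at the top-left, as in the question.
% Hypothetical helper: one cubemap pixel -> equirectangular pixel.
% Face orientation vectors and sign conventions are assumptions.
function [U, V] = cubeToEquirect(px, py, face, faceSize, Usize, Vsize)
    % 1) image + 2D coordinates -> 3D normalized direction vector
    a = 2*(px + 0.5)/faceSize - 1;            % horizontal, in [-1, 1]
    b = 2*(py + 0.5)/faceSize - 1;            % vertical,   in [-1, 1]
    switch face
        case 'Front',  d = [ a, -b,  1];
        case 'Back',   d = [-a, -b, -1];
        case 'Left',   d = [-1, -b,  a];
        case 'Right',  d = [ 1, -b, -a];
        case 'Top',    d = [ a,  1,  b];
        case 'Bottom', d = [ a, -1, -b];
    end
    d = d / norm(d);
    % 2) Cartesian -> spherical (only the two angles are needed)
    lon = atan2(d(1), d(3));                  % longitude in (-pi, pi]
    lat = asin(d(2));                         % latitude  in [-pi/2, pi/2]
    % 3) spherical angles -> equirectangular pixel coordinates
    U = (lon + pi)   * Usize/(2*pi);
    V = (pi/2 - lat) * Vsize/pi;              % V grows downward
end
For the first point in the question this would be called as [U, V] = cubeToEquirect(160, 314, 'Front', 500, 2500, 1250).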
Btw, I just found this (a possible duplicate Q&A):
GLSL Shader to convert six textures to Equirectangular projection
I'm trying to compute an efficient way to transform an image in Cartesian coordinates into a polar representation. I know some functions such as ImToPolar do this, and it works perfectly, but it takes considerably more time for big images, especially when they need to be processed back and forth.
Here's my input image:
I then generate a polar mesh using a Cartesian mesh centered at 0 and the function cart2pol(). Finally, I plot my image using mesh(theta, r, Input).
And here's what I obtain:
It's exactly the image I need, and it's the same as ImToPolar or maybe better.
Since MATLAB knows how to compute it, does anybody know how to extract a matrix in polar representation from this output? Or maybe a fast (as in fast Fourier transform) way to compute a polar transform (and its inverse) in MATLAB?
pol2cart, meshgrid and interp2 are sufficient to create the result:
I = imread('http://i.stack.imgur.com/HYSyb.png');
[r, c, ~] = size(I);
% the RGB image can be converted to an indexed image to prevent excessive computation for each color channel
[idx, mp] = rgb2ind(I, 32);
% add an offset to the image coordinates so the origin is at the image center
x = (1:c) - (c/2);
y = (1:r) - (r/2);
% create destination coordinates in polar form so the image values can be interpolated at those coordinates
% the angle ranges from 0 to 2*pi and the radius is assumed to range from 0 to 400
% linspace(0, 2*pi, 200) leads to a stretched image - try it!
[xp, yp] = meshgrid(linspace(0, 2*pi), linspace(0, 400));
% translate coordinates from polar to image (Cartesian) coordinates
[xx, yy] = pol2cart(xp, yp);
% interpolate pixel values at the transformed coordinates (interp2 needs floating-point data)
out = interp2(x, y, double(idx), xx, yy);
% save the result to a file (convert back to integer indices for imwrite)
imwrite(uint8(out), mp, 'result.png')
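The inverse direction that the question also asks about (polar image back to Cartesian) works with the same few functions. A minimal sketch, assuming out, x, y and the angle/radius ranges from the code above:
% Sketch of the inverse mapping, reusing out, x, y and the ranges above.
[xc, yc] = meshgrid(x, y);                   % destination Cartesian grid
[theta, rho] = cart2pol(xc, yc);             % polar coordinates of each pixel
theta = mod(theta, 2*pi);                    % match the 0..2*pi range used above
% sample the polar image: its columns span the angle, its rows span the radius
back = interp2(linspace(0, 2*pi), linspace(0, 400), out, theta, rho, 'linear', 0);
figure; imshow(uint8(back), mp);             % view the reconstructed image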
I'd like to take in an RGB image, find the points in the image that are white, and get the Cartesian coordinates of those points in the image. I've gotten most of the way there, but when I try to plot the Cartesian coordinates, I get a vertically tiled image (i.e. 5 overlapped copies of what I should see). Anyone know what could be causing this?
Code: (the JPG comes in as 2448 x 3264 x 3 uint8)
I = imread('IMG_0245.JPG');
imshow(I); % display unaltered image
% Convert image to grayscale
I = rgb2gray(I);
% Convert image to binary (black/white)
I = im2bw(I, 0.9);
% Generate cartesian coordinates of image
imageSize = size(I);
[x, y] = meshgrid( 1:imageSize(1), 1:imageSize(2) );
PerspectiveImage = [x(:), y(:), I(:)];
% Get indices of white points only
whiteIndices = find(PerspectiveImage(:,3));
figure; plot( PerspectiveImage(whiteIndices, 1), PerspectiveImage(whiteIndices, 2),'.');
% Flip vertically to correct indexing vs. plotting issue
axis ij
Very simple. You're declaring your meshgrid wrong. It should be:
[x, y] = meshgrid( 1:imageSize(2), 1:imageSize(1) );
The first parameter denotes the horizontal extents of the 2D grid, and so you want to make this vary for as many columns as you have. Similarly, the second parameter denotes the vertical extents of the 2D grid, and so you want to make this for as many rows as you have.
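A toy illustration of that argument order, for a hypothetical 2-row, 3-column image:
% meshgrid(columns, rows): x varies along columns, y varies along rows,
% and both outputs are rows-by-columns, matching the image.
[x, y] = meshgrid(1:3, 1:2)
% x =            y =
%    1  2  3        1  1  1
%    1  2  3        2  2  2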
I had to pre-process your image to get good results, because the original had a large white border surrounding it. I removed this border by removing all pure white pixels. I also read the image in directly from StackOverflow:
I = imread('http://s7.postimg.org/ovb53w4ff/Track_example.jpg');
mask = all(I == 255, 3);
I = bsxfun(@times, I, uint8(~mask));
This is the image I get after doing my pre-processing:
Once I do this and change your meshgrid call, I get this:
I have an image in YUV 420 semi-planar format where the bytes are stored in such manner:
Y plane:              UV plane (interleaved):
[Y1  Y2   ...         [U1   V1    ...
 Yk  Yk+1 ...]         Uk'  Uk'+1 ...]
where the size of the Y plane is twice that of the UV plane, and for every 2x2 block of Y values there is 1 U and 1 V value.
I want to apply a homography matrix to this image without converting it to RGB. It's easy to do this for the Y plane, as Y values have a one-to-one mapping with the x-y pixel coordinates of the image, but how do I do this for the UV plane, since UV values don't have a direct mapping to the x-y pixel coordinates?
I did it by filling the UV-plane only for even x-y coordinates.
First, I found the Y-plane coordinate in the original image for that pixel location by applying the inverse homography. Then I found the corresponding UV coordinate for that Y coordinate and used the value at that location to fill the UV plane of my new image.
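A minimal MATLAB sketch of that idea. Hinv (the 3x3 inverse homography), Y (the source luma plane) and UV (the source interleaved chroma plane) are assumed inputs, and even image dimensions are assumed:
% Fill the destination UV plane by visiting one luma coordinate per 2x2 block,
% inverse-mapping it into the source image, and copying the chroma pair that
% covers the mapped location. Hinv, Y and UV are assumed inputs.
[h, w] = size(Y);
UVout = zeros(h/2, w, 'like', UV);
for yo = 1:2:h                         % top-left luma sample of each 2x2 block
    for xo = 1:2:w
        p  = Hinv * [xo; yo; 1];       % inverse homography on the luma coordinate
        xs = round(p(1) / p(3));
        ys = round(p(2) / p(3));
        if xs >= 1 && xs <= w && ys >= 1 && ys <= h
            srcRow = ceil(ys / 2);             % chroma row covering (xs, ys)
            srcCol = 2 * ceil(xs / 2) - 1;     % column of U in the interleaved plane
            dstRow = (yo + 1) / 2;             % destination chroma row
            dstCol = xo;                       % U at the odd column, V right after it
            UVout(dstRow, dstCol)     = UV(srcRow, srcCol);      % U
            UVout(dstRow, dstCol + 1) = UV(srcRow, srcCol + 1);  % V
        end
    end
end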
I have to transform pixels from one image onto another image using feature detection. I have calculated the projective transformation matrix. One image is the base image, and the other is a linearly translated image.
Now I have to define a larger grid and assign pixels from the base image to it. For example, if the base image is 20 at (1,1), on the larger grid I will have 20 at (1,1). and assign zeroes to all the unfilled values of the grid. Then I have to map the linearly translated image onto the base image and write my own algorithm based on "delaunay triangulation" to interpolate between the images.
My question is that when I map the translated image to the base image, I use the concept
(w,z) = inv(T) * (x,y)
A = inv(T) * B
where (w,z) are coordinates in the base image, (x,y) are coordinates in the translated image, A is a matrix containing the coordinates (w z 1) and B is a matrix containing the coordinates (x y 1).
If I use the following code I get the new coordinates, but how do I relate these things to the image? Are my pixels from the second image also translated onto the first image? If not, how can I do this?
close all; clc; clear all;
image1_gray=imread('C:\Users\Javeria Farooq\Desktop\project images\a.pgm');
figure; imshow(image1_gray); axis on; grid on;
title('Base image');
impixelinfo
hold on
image2_gray =imread('C:\Users\Javeria Farooq\Desktop\project images\j.pgm');
figure(2); imshow(image2_gray); axis on; grid on;
title('Unregistered image1');
impixelinfo
% Detect and extract features from both images
points_image1= detectSURFFeatures(image1_gray, 'NumScaleLevels', 100, 'NumOctaves', 5, 'MetricThreshold', 500 );
points_image2 = detectSURFFeatures(image2_gray, 'NumScaleLevels', 100, 'NumOctaves', 12, 'MetricThreshold', 500 );
[features_image1, validPoints_image1] = extractFeatures(image1_gray, points_image1);
[features_image2, validPoints_image2] = extractFeatures(image2_gray, points_image2);
% Match feature vectors
indexPairs = matchFeatures(features_image1, features_image2, 'Prenormalized', true) ;
% Get matching points
matched_pts1 = validPoints_image1(indexPairs(:, 1));
matched_pts2 = validPoints_image2(indexPairs(:, 2));
figure; showMatchedFeatures(image1_gray,image2_gray,matched_pts1,matched_pts2,'montage');
legend('matched points 1','matched points 2');
figure(5); showMatchedFeatures(image1_gray,image3_gray,matched_pts4,matched_pts3,'montage');
legend('matched points 1','matched points 3');
% Compute the transformation matrix using RANSAC
[tform, inlierFramePoints, inlierPanoPoints, status] = estimateGeometricTransform(matched_pts1, matched_pts2, 'projective')
figure(6); showMatchedFeatures(image1_gray,image2_gray,inlierPanoPoints,inlierFramePoints,'montage');
[m n] = size(image1_gray);
image1_gray = double(image1_gray);
[x1g,x2g]=meshgrid(m,n) % A MESH GRID OF 2X2
k=imread('C:\Users\Javeria Farooq\Desktop\project images\a.pgm');
ind = sub2ind( size(k),x1g,x2g);
%[tform1, inlierFramepPoints, inlierPanopPoints, status] = estimateGeometricTransform(matched_pts4, matched_pts3, 'projective')
%figure(7); showMatchedFeatures(image1_gray,image3_gray,inlierPanopPoints,inlierFramepPoints,'montage');
%invtform=invert(tform)
%x=invtform
%[xq,yq]=meshgrid(1:0.5:200.5,1:0.5:200.5);
r=[];
A=[];
k=1;
% I did not know how to refer to the variable tform, so I wrote out the transformation
% matrix from the tform structure by hand
T=[0.99814272,-0.0024304502,-1.2932052e-05;2.8876773e-05,0.99930143,1.6285858e-06;0.029063907,67.809265,1]
% let's take i=1:400 so my r=2 and the resulting grid is 400x400
for i=1:200
for j=1:200
A=[A; i j 1];
z=A*T;
r=[r;z(k,1)/z(k,3),z(k,2)/z(k,3)];
k=k+1;
end
end
% I have transformed the coordinates, but how do I assign values??
%r(i,j)=c(i,j)
d1=[];
d2=[];
for l=1:40000
d1=[d1;A(l,1)];
d2=[d2;r(l,1)];
X=[d1 d2];
X=X(:);
end
c1=[];
c2=[];
for l=1:40000
c1=[c1;A(l,2)];
c2=[c2;r(l,2)];
Y=[c1 c2];
Y=Y(:);
end
% this Delaunay triangulation is of vertices only; as far as I understand it,
% it does not have any pixel values from either image
DT=delaunayTriangulation(X,Y);
triplot(DT,X,Y);
I solved this problem by using these two steps:
Use the transformPointsForward command to transform the coordinates of the image, using the tform object returned by estimateGeometricTransform.
Use the scatteredInterpolant class in MATLAB to assign the transformed coordinates their respective pixel values:
F=scatteredInterpolant(P,z)
Here P is an n-by-2 matrix containing all the transformed coordinates, and
z is an n-by-1 vector containing the pixel values of the image being transformed; it is obtained by converting the image to a column vector using image = image(:).
Finally, all the transformed coordinates are present along with their pixel values on the base image and can be interpolated, as in the sketch below.
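A minimal sketch of those two steps, reusing image2_gray and tform from the code above; the grid names and the 'linear'/'none' interpolation options are assumptions:
% Sketch of the two steps above, reusing image2_gray and tform from the question.
[h, w]   = size(image2_gray);
[xs, ys] = meshgrid(1:w, 1:h);                            % source pixel grid
[xt, yt] = transformPointsForward(tform, xs(:), ys(:));   % step 1: transform coordinates
z = double(image2_gray(:));                               % pixel values as a column vector
F = scatteredInterpolant([xt, yt], z, 'linear', 'none');  % step 2: build the interpolant
[xq, yq] = meshgrid(1:w, 1:h);                            % query grid in base-image coordinates
warped   = reshape(F(xq(:), yq(:)), h, w);                % registered image on the base grid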
You are doing way too much work here, and I don't think you need the Delaunay Triangulation at all. Use the imwarp function from the Image Processing Toolbox to transform the image. It takes the original image and the tform object returned by estimateGeometricTransform.
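For reference, a sketch of that imwarp route; the 'OutputView' argument is optional and is shown here only to keep the result aligned with the base image's coordinate frame:
% Warp the translated image with the estimated tform; 'OutputView' keeps the
% output the same size and frame as the base image.
outputView = imref2d(size(image1_gray));
registered = imwarp(image2_gray, tform, 'OutputView', outputView);
figure; imshowpair(image1_gray, registered, 'blend');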