I am trying to find keypoints in a rotated and then subsampled image using fastcorners. My code:
tfm = Translation((r/2)-1,(c/2)-1) ∘ LinearMap(RotMatrix(-theta)) ∘ Translation(-((r/2)-1),-((c/2)-1))
uR = warp(img1, inv(tfm), indices(img1))
uT = subSample(uR, axes(uF)[1][1], axes(uF)[1][end], t)
kpts = Keypoints(fastcorners(uT, 12, 0.5))
The image is rotated and then subsampled in the vertical direction, so the resultant image is no longer a rectangle but a parallelogram.
Now I want to remove keypoints at the boundary of the rotated and subsampled image (i.e. within some distance d of the boundary of the distorted image, the parallelogram).
Can you suggest how I can proceed? kpts stores the Cartesian coordinates of the keypoints in the distorted image.
Thanks!
Find the distance from the coordinates of the feature point to each of the corners of the distorted image using the formula mentioned here. Then find the coordinates of the keypoint in the original image by upsampling and then performing an inverse rotation.
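(In case the link goes stale: assuming the linked formula is the standard point-to-line distance, the distance of a keypoint (x0, y0) from the boundary edge passing through two adjacent corners (x1, y1) and (x2, y2) is

d = |(x2 - x1)*(y1 - y0) - (x1 - x0)*(y2 - y1)| / sqrt((x2 - x1)^2 + (y2 - y1)^2)

and the keypoint can be discarded if this distance is below your threshold for any of the four edges of the parallelogram.)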
For each of the keypoints, apply the inverse transformation (including the inverse subsampling) and then check whether the result lies on (or within d of) the boundary of the original image.
I have an image rotated with imrotate as follows:
Im_requete=imread('lena.jpg');
Im_requete_G=rgb2gray(Im_requete);
Im_requete_G_scale_rot = imresize(imrotate(Im_requete_G,-20), 1.2);
I'm trying to get the coordinates (x, y) of the four corners of the rotated image as illustrated in the image below (red circle represents the desired corner):
This is my code:
stat = regionprops(Im_requete_G_scale_rot,'Extrema'); %extrema detection of the image.
point = stat.Extrema;
hold on
figure,imshow(Im_requete_G_scale_rot)
hold on
for i = 2:2:length(point)
x = point(i,1);
y = point(i,2);
plot(x,y,'o');
text(x,y,num2str(i),'color','r')
end
But the resulting coordinates are somewhere along the edges and not where I wanted them to be, as illustrated in the second image:
Can someone please tell me what's wrong with this code?
I don't have a good explanation for this, but I suppose regionprops gets confused by the grayscale tones in the image (a non-logical numeric array is treated as a label matrix, so every gray level becomes its own region). If we turn the rotated Lena into a logical array, your algorithm works properly:
Im_requete_G_scale_rot = logical(imresize(imrotate(Im_requete_G,-20), 1.2)); % 3rd line
I have a set of coordinates of a 6-image Cubemap (Front, Back, Left, Right, Top, Bottom) as follows:
[ [160, 314], Front; [253, 231], Front; [345, 273], Left; [347, 92], Bottom; ... ]
Each image is 500x500 px, with [0, 0] being the top-left corner.
I want to convert these coordinates to their equivalents in equirectangular, for a 2500x1250p image. The layout is like this:
I don't need to convert the whole image, just the set of coordinates. Is there any straightforward conversion for a specific pixel?
1. Convert your image + 2D coordinates to a 3D normalized direction vector
The point (0,0,0) must be the center of your cube map for this to work as intended. So basically you need to add the U,V direction vectors, scaled by your coordinates, to the 3D position of the texture point (0,0). The direction vectors are just unit vectors where each axis has 3 options {-1, 0, +1} and only one axis coordinate is non-zero for each vector. Each side of the cube map has one such combination ... which one depends on your conventions, which we do not know as you did not share any specifics.
2. Use a Cartesian to spherical coordinate system transformation
You do not need the radius, just the two angles ...
3. Convert the spherical angles to your 2D texture coordinates
This step depends on your 2D texture geometry. The simplest is a rectangular texture (I think that is what you mean by equirectangular), but there are other mappings out there with specific features, and each requires a different conversion. Here are a few examples:
Bump-map a sphere with a texture map
How to do a shader to convert to azimuthal_equidistant
For the rectangular texture you just scale the spherical angles to the texture resolution (a small sketch follows after these formulas)...
U = lon * Usize/(2*Pi)
V = (lat+(Pi/2)) * Vsize/Pi
plus/minus some orientation signs to match your coordinate systems.
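Putting the three steps together, here is a minimal MATLAB sketch (MATLAB only as an illustration). The face-to-axis assignments and the longitude origin are assumptions and have to be adapted to your actual cube map layout; faceSize, W and H would be 500, 2500 and 1250 for your images:
function [U, V] = cube2equirect(face, u, v, faceSize, W, H)
    % map the face pixel coordinates into [-1, 1]
    a = 2*(u + 0.5)/faceSize - 1;
    b = 2*(v + 0.5)/faceSize - 1;
    % step 1: build the 3D direction vector (the face-to-axis mapping is an assumption)
    switch face
        case 'Front',  d = [ 1;  a; -b];
        case 'Back',   d = [-1; -a; -b];
        case 'Left',   d = [-a;  1; -b];
        case 'Right',  d = [ a; -1; -b];
        case 'Top',    d = [ b;  a;  1];
        case 'Bottom', d = [-b;  a; -1];
    end
    d = d / norm(d);
    % step 2: Cartesian to spherical (the radius is not needed)
    lon = atan2(d(2), d(1));          % longitude in (-pi, pi]
    lat = asin(d(3));                 % latitude in [-pi/2, pi/2]
    % step 3: scale the angles to the equirectangular resolution
    U = (lon + pi) * W / (2*pi);
    V = (lat + pi/2) * H / pi;
end
For example, [U, V] = cube2equirect('Front', 160, 314, 500, 2500, 1250) would give the equirectangular pixel for your first point, up to the orientation/sign choices mentioned above.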
BTW, I just found this (possibly a duplicate Q&A):
GLSL Shader to convert six textures to Equirectangular projection
I need to know how to align an image in Matlab for further work.
For example, I have the following license plate image and I want to recognize all the digits.
My program works for straight images, so I need to align the image and then perform the optical character recognition.
The method should be as universal as possible, so that it works for all kinds of plates and at all kinds of angles.
EDIT: I tried to do this with the Hough transform but I didn't succeed. Can anybody help me do this?
Any help will be greatly appreciated.
The solution was first hinted at by @AruniRC in the comments, then implemented by @belisarius in Mathematica. The following is my interpretation in MATLAB.
The idea is basically the same: detect edges using the Canny method, find prominent lines using the Hough transform, compute the line angles, and finally perform a shearing transform to align the image.
%# read and crop image
I = imread('http://i.stack.imgur.com/CJHaA.png');
I = I(:,1:end-3,:); %# remove small white band on the side
%# edge detection
BW = edge(rgb2gray(I), 'canny');
%# hough transform
[H T R] = hough(BW);
P = houghpeaks(H, 4, 'threshold',ceil(0.75*max(H(:))));
lines = houghlines(BW, T, R, P);
%# shearing transform
slopes = vertcat(lines.point2) - vertcat(lines.point1);
slopes = slopes(:,2) ./ slopes(:,1);
TFORM = maketform('affine', [1 -slopes(1) 0 ; 0 1 0 ; 0 0 1]);
II = imtransform(I, TFORM);
Now let's see the results:
%# show edges
figure, imshow(BW)
%# show accumulation matrix and peaks
figure, imshow(imadjust(mat2gray(H)), [], 'XData',T, 'YData',R, 'InitialMagnification','fit')
xlabel('\theta (degrees)'), ylabel('\rho'), colormap(hot), colorbar
hold on, plot(T(P(:,2)), R(P(:,1)), 'gs', 'LineWidth',2), hold off
axis on, axis normal
%# show image with lines overlaid, and the aligned/rotated image
figure
subplot(121), imshow(I), hold on
for k = 1:length(lines)
xy = [lines(k).point1; lines(k).point2];
plot(xy(:,1), xy(:,2), 'g.-', 'LineWidth',2);
end, hold off
subplot(122), imshow(II)
In Mathematica, using Edge Detection and Hough Transform:
If you are using some kind of machine learning toolbox for text recognition, try to learn from ALL plates - not only aligned ones. Recognition results should be equally good whether you transform the plate or not, since the transformation adds no new information about the true number to the image.
If all the images have a dark background like that one, you could binarize the image, fit lines to the top or bottom of the bright area and calculate an affine projection matrix from the line gradient.
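A rough sketch of that idea in MATLAB (assuming an RGB plate image I with a dark background; the global threshold and the pure-shear model are simplifications):
Ig = rgb2gray(I);
BW = im2bw(Ig, graythresh(Ig));        % binarize: bright plate area vs dark background
cols = size(BW, 2);
topEdge = nan(1, cols);
for c = 1:cols
    r = find(BW(:, c), 1, 'first');    % topmost bright pixel in each column
    if ~isempty(r), topEdge(c) = r; end
end
valid = find(~isnan(topEdge));
p = polyfit(valid, topEdge(valid), 1); % fit a line to the top of the bright area
TFORM = maketform('affine', [1 -p(1) 0; 0 1 0; 0 0 1]);  % shear by the line gradient
II = imtransform(I, TFORM);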
Face and feature landmarks
I have a face image with labelled face features. The image is stored in standard JPEG format and the landmarks are stored in [x y] format (the x,y of a point corresponds to its coordinates in the image, as shown below).
Interpolated 3d face mesh
I have generated depth information (a 3d mesh) for each of the labelled points, and have a matrix in [x y z] format, where the coordinates x and y are the same as that of the points.
The sparse mesh looks like this:
I then interpolated over xrange, yrange and zrange to get a better mesh. Using mesh(xrange,yrange,zrange) gives me the following:
The colours for the face image pixels can be obtained using imread('face_image.jpg').
Given that the (x,y) value of each of the interpolated points corresponds to (x,y) in the image, is it possible to make the colour of the pixel at (x,y,z) [3d mesh] the same as the colour of (x,y) [face image]?
This would effectively superimpose/warp the face onto the 3d mesh, giving me a 3d face model.
I would suggest this:
n=50000; % choose something appropriate
[C,map] = rgb2ind(FaceImageRGB,n);
This maps the colors in your RGB image to a linear index. Make sure the mesh and the RGB image have the same x-y dimensions.
Then use surf to plot the surface with the indexed values for color (should be in the form surf(X,Y,Z,C)) and the map as color map.
surf(xrange, yrange, zrange, double(C)), shading flat;
colormap(map);
Edit: a working example (with a colorful image this time...):
rgbim=imread('http://upload.wikimedia.org/wikipedia/commons/0/0d/Loriculus_vernalis_-Ganeshgudi,_Karnataka,_India_-male-8-1c.jpg');
n=50000; % choose something appropriate
[C,map] = rgb2ind(rgbim,n);
% Creation of mesh with the same dimensions as the image:
[X,Y] = meshgrid(-floor(size(rgbim, 2)/2):floor(size(rgbim, 2)/2), -floor(size(rgbim, 1)/2):floor(size(rgbim, 1)/2));
% An arbitrary function for Z:
Z=-(X.^2+Y.^2);
% Display the surface with the image as color value:
surf(X, Y, Z, double(C)), shading flat
colormap(map);
Result:
I have a plain bitmap and I want to do a projection on a cylinder.
That means I want to transform the image so that if I print it, wrap it around a cylindrical column and photograph it from a certain position, the resulting image looks like the original.
Still, I'm quite lost among all the projection algorithms (which are often related to Earth projections).
So I'd be thankful for hints on what the correct algorithm could be and which tools I could use to apply it to my image.
Let's say you have a rectangular image of length L and height H,
and a cylinder of radius R and height H'.
Let A(x, z) be a point in the picture.
Then A'(x', y', z') = ( R*cos(x*(2*Pi/L)), R*sin(x*(2*Pi/L)), z*(H'/H) ) will be the projection of your point A onto the cylinder.
Proof:
1. z' = z*(H'/H)
I first fit the cylinder to the image size, which is why I multiply by (H'/H), and I keep the same z axis (if you draw it you will see it immediately).
2. x' and y'?
I project each horizontal line of my image onto a circle. The parametric equation of a circle is (R*cos(t), R*sin(t)) for t in [0, 2*Pi]; it maps a segment (t in [0, 2*Pi]) to a circle, which is exactly what we are trying to do.
Then, if x describes a line of length L, x*(2*Pi)/L describes a line of length 2*Pi, and I can use the parametric equation to map each point of this line to a point on the circle.
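A small numeric sketch of this forward mapping, written in MATLAB just for illustration (the image and cylinder sizes are placeholders):
L = 800; H = 600;        % image length and height
R = 100; Hc = 300;       % cylinder radius and height H'
x = 200; z = 150;        % a point A(x, z) in the picture
t  = x * (2*pi / L);     % map the horizontal position onto [0, 2*pi]
xp = R * cos(t);         % x' on the cylinder
yp = R * sin(t);         % y' on the cylinder
zp = z * (Hc / H);       % z' scaled to the cylinder height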
Hope it helps
The previous answer gave the function to "press" a plane against a cylinder.
This is a bijection, so from a given point in the cylinder you can easily get the original image.
For a point A(x, y, z) on the cylinder, the corresponding point A'(x', z') in the image is:
z' = z*(H/H')
x' = L/(2*Pi) * { arccos(x/R) * sign(y) (mod 2*Pi) }
(it's a pretty ugly formula but that's it :D and you need to express the modulo as a positive value)
If you can apply that to your cylindrical image, you know how to unroll your picture.
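Continuing the MATLAB sketch from the previous answer, the inverse mapping would look like this (sign(y) is assumed to be non-zero here; the y = 0 case needs separate handling):
theta = mod(acos(xp / R) * sign(yp), 2*pi);  % positive angle in [0, 2*pi)
x_rec = theta * L / (2*pi);                  % recovered x in the image
z_rec = zp * (H / Hc);                       % recovered z in the image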