I have a plain bitmap and I want to do a projection on a cylinder.
That means I want to transform the image so that if I print it, wrap it around a circular cylinder, and photograph it from a certain position, the resulting image looks like the original.
Still, I'm quite lost among all the projection algorithms (which are often related to map projections of the earth).
So I'd be thankful for hints on what the correct algorithm could be and which tools I could use to apply it to my image.
Let's say you have a rectangular image of length L and height H,
and a cylinder of radius R and height H'.
Let A(x, z) be a point in the picture.
Then A'(x', y', z') = ( R*cos(x*(2*Pi/L)), R*sin(x*(2*Pi/L)), z*(H'/H) ) is the projection of your point A onto your cylinder.
Proof:
1. z' = z*(H'/H)
I first fit the cylinder to the image size, which is why I multiply by (H'/H), and I keep the same z axis (if you draw it you will see it immediately).
2. x' and y'?
I project each line of my image onto a circle. The parametric equation of a circle is (R*cos(t), R*sin(t)) for t in [0, 2*Pi]; the parametric equation maps a segment (t in [0, 2*Pi]) to a circle. That's exactly what we are trying to do.
Then, if x describes a line of length L, x*(2*Pi)/L describes a line of length 2*Pi, and I can use the parametric equation to map each point of this line to a circle.
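For illustration, here is a minimal MATLAB sketch of this forward mapping (the image and cylinder dimensions are made-up example values; I call the cylinder height Hc to avoid the prime):
L = 400; H = 300;                  % image length and height (example values)
R = 50;  Hc = 100;                 % cylinder radius and height (example values)
[x, z] = meshgrid(0:L-1, 0:H-1);   % pixel coordinates of the image
t  = x * (2*pi/L);                 % map [0, L) onto [0, 2*pi)
Xc = R * cos(t);                   % x' coordinate on the cylinder
Yc = R * sin(t);                   % y' coordinate on the cylinder
Zc = z * (Hc/H);                   % z' scaled to the cylinder height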
Hope it helps
The previous answer gave the function to "press" a plane against a cylinder.
This is a bijection, so from a given point on the cylinder you can easily get back to the original image.
For A(x, y, z) on the cylinder, the corresponding point
A'(x', z') in the image is:
z' = z*(H/H')
and x' = L/(2*Pi) * { sign(y) * arccos(x/R) (mod 2*Pi) }
(It's a pretty ugly formula, but that's it :D and you need to express the modulo as a positive value.)
If you apply that to your cylindrical image, you know how to uncoil your picture.
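Continuing the MATLAB sketch from the previous answer, the inverse mapping can be written with atan2, which computes the same angle as sign(y)*arccos(x/R) in one call and, after the mod, is already positive:
t  = mod(atan2(Yc, Xc), 2*pi);   % angle in [0, 2*pi)
xi = L * t / (2*pi);             % x' back in the image
zi = Zc * (H/Hc);                % z' back in the image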
A similar question was asked before; unfortunately I cannot comment on Samgak's answer, so I am opening a new post with this one. Here is the link to the old question:
How to calculate ray in real-world coordinate system from image using projection matrix?
My goal is to map from image coordinates to world coordinates. In fact I am trying to do this with the Camera Intrinsics Parameters of the HoloLens Camera.
Of course this mapping will only give me a ray connecting the camera's optical centre and all points which can lie on that ray. For the mapping from image coordinates to camera coordinates we can use the inverse camera matrix, which is:
K^-1 = [1/fx 0 -cx/fx; 0 1/fy -cy/fy; 0 0 1]
Pcam = K^-1 * Ppix
Pcam_x = Ppix_x/fx - cx/fx
Pcam_y = Ppix_y/fy - cy/fy
Pcam_z = 1
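As a minimal MATLAB sketch of this unprojection (fx, fy, cx, cy are made-up illustrative intrinsics, not real HoloLens values):
fx = 500; fy = 500; cx = 320; cy = 240;        % illustrative intrinsics
Kinv = [1/fx 0 -cx/fx; 0 1/fy -cy/fy; 0 0 1];  % inverse camera matrix
Ppix = [100; 50; 1];                           % homogeneous pixel coordinates
Pcam = Kinv * Ppix;  % = [Ppix(1)/fx - cx/fx; Ppix(2)/fy - cy/fy; 1]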
Orientation of Camera Coordinate System and Image Plane
In this specific case the image plane is probably at Z = -1 (however, I am a bit uncertain about this). The section "Pixel to Application-specified Coordinate System" on the HoloLens CameraProjectionTransform page describes how to go from pixel coordinates to world coordinates. As far as I understand, two signs in K^-1 are flipped, so that we calculate the coordinates as follows:
Pcam_x = (Ppix_x/fx) - (-cx/fx) = Ppix_x/fx + cx/fx
Pcam_y = (Ppix_y/fy) - (-cy/fy) = Ppix_y/fy + cy/fy
Pcam_z = -1
Pcam = (Pcam_x, Pcam_y, -1)
CameraOpticalCentre = (0,0,0)
Ray = Pcam - CameraOpticalCentre
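Continuing the sketch above for the Z = -1 convention (same made-up intrinsics), the flipped-sign version is:
Kinv_neg = [1/fx 0 cx/fx; 0 1/fy cy/fy; 0 0 -1];  % principal-point signs flipped
Pcam = Kinv_neg * Ppix;  % = [Ppix(1)/fx + cx/fx; Ppix(2)/fy + cy/fy; -1]
Ray  = Pcam - [0; 0; 0]; % ray from the camera optical centre at the origin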
I do not understand how to create the camera intrinsics for the case of the image plane being at a negative Z-coordinate, and I would like to have a mathematical explanation or an intuitive understanding of why we have the sign flip (Ppix_x/fx + cx/fx instead of Ppix_x/fx - cx/fx).
Edit: I read in another post that the third column of the camera matrix has to be negated when the camera is facing down the negative z-direction. This would explain the sign flip. However, why do we need to change the sign of the third column? I would like to have an intuitive understanding of this.
Here is the link to the post: Negation of third column
Thanks a lot in advance,
Lisa
why do we need to change the sign of the third column
To understand why we need to negate the third column of K (i.e. negate the principal points of the intrinsic matrix), let's first understand how to get the pixel coordinates of a 3D point already in the camera coordinate frame. After that, it is easier to understand why -z requires negating things.
Let's imagine a camera C and one point B in space (w.r.t. the camera coordinate frame), and let's put the camera sensor (i.e. the image) at E' as in the image below. Then f (in red) will be the focal length and u (in blue) will be the x coordinate of B in pixels (from the center of the image). To simplify things, let's place B at the corner of the field of view (i.e. in the corner of the image).
We need to calculate the coordinates of B projected onto the sensor (which is the same as the 2D image). Because the triangles AEB and AE'B' are similar, u/f = X/Z and therefore u = X*f/Z. X*f is the first operation the K matrix performs; we can multiply K*B (with B as a column vector) to check.
This gives us coordinates in pixels w.r.t. the center of the image. Let's imagine the image is 480x480. Then B' will look like this in the image below. Keep in mind that in image coordinates the y-axis increases going down and the x-axis increases going right.
In images, the pixel at coordinates (0,0) is in the top left corner, so we need to add half of the width of the image to the point we have: px = X*f/Z + cx, where cx is the principal point in the x-axis, usually W/2. px = X*f/Z + cx is exactly what K * B / Z computes. So X*f/Z was -240; if we add cx (W/2 = 480/2 = 240) then X*f/Z + cx = 0, and the same with Y. The final pixel coordinates in the image are (0,0) (i.e. the top left corner).
Now, in the case where we use negative z, when we divide X and Y by Z, because Z is negative it changes the sign of X and Y, so the point is projected to B'' in the opposite quadrant, as in the image below.
Now the second image will instead be:
Because of this, instead of adding the principal point, we need to subtract it. That is the same as negating the last column of K.
So we have 240 - 240 = 0 (where the second 240 is the principal point in x, cx), and the same for Y: the pixel coordinates are (0,0), as in the example when z was positive. If we do not negate the last column we end up with (480,480) instead of (0,0).
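Here is a tiny MATLAB check of that arithmetic (the numbers follow the 480x480 example above, not a real camera):
f = 240; cx = 240; cy = 240;        % illustrative focal length and principal point
K  = [f 0 cx; 0 f cy; 0 0 1];
B  = [-1; -1; 1];                   % corner of the FOV, camera looking down +z
p  = K * B / B(3);                  % -> [0; 0; 1], i.e. pixel (0,0), top left
Bn = [-1; -1; -1];                  % same point, camera looking down -z
pn = K * Bn / Bn(3);                % -> pixel (480,480), the wrong corner
Kn = K;  Kn(:,3) = -Kn(:,3);        % negate the third column of K
pn2 = Kn * Bn;  pn2 = pn2/pn2(3);   % -> pixel (0,0) again, as expected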
Hope this helped a little bit
I'm trying to find an efficient way to transform an image in Cartesian coordinates into a polar representation. I know functions such as ImToPolar that do it, and they work perfectly, but they take considerable time for big images, especially when the images need to be processed back and forth.
Here's my input image:
and then I generate a polar mesh using a Cartesian mesh centered at 0 and the function cart2pol(). Finally, I plot my image using mesh(theta, r, Input).
And here's what I obtain:
It's exactly the image I need, and it's the same as ImToPolar, or maybe better.
Since MATLAB knows how to compute it, does anybody know how to extract a matrix in polar representation from this output? Or maybe a fast (as in fast Fourier transform) way to compute a polar transform (and its inverse) in MATLAB?
pol2cart, meshgrid and interp2 are sufficient to create the result:
I = imread('http://i.stack.imgur.com/HYSyb.png');
[r, c, ~] = size(I);
% the RGB image is converted to an indexed image to prevent excessive
% computation for each color channel
[idx, mp] = rgb2ind(I, 32);
% add an offset so the image coordinates are centered on the origin
x = (1:c) - (c/2);
y = (1:r) - (r/2);
% create destination coordinates in polar form so the value of the image
% can be interpolated at those coordinates;
% the angle ranges from 0 to 2*pi and the radius is assumed to range
% from 0 to 400 (linspace(0, 2*pi, 200) leads to a stretched image, try it!)
[xp, yp] = meshgrid(linspace(0, 2*pi), linspace(0, 400));
% translate coordinates from polar to image coordinates
[xx, yy] = pol2cart(xp, yp);
% interpolate pixel values at the unknown coordinates
% (interp2 needs floating-point input, hence the double() cast)
out = uint8(interp2(x, y, double(idx), xx, yy));
% save the result to a file
imwrite(out, mp, 'result.png')
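For the inverse direction, a sketch under the same assumptions (sampling the polar image back onto a Cartesian grid):
[xc, yc] = meshgrid(x, y);     % Cartesian grid of the output image
[tt, rr] = cart2pol(xc, yc);   % angle and radius of every output pixel
tt = mod(tt, 2*pi);            % match the [0, 2*pi] angle range used above
% columns of 'out' are indexed by angle, rows by radius, so those vectors
% are passed as the sample coordinates
back = uint8(interp2(linspace(0, 2*pi), linspace(0, 400), double(out), tt, rr));
imwrite(back, mp, 'result_back.png')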
I would like to fit MR binary data (a 281*398*104 matrix) of an object that is not a perfect sphere, and find the center and radius of the sphere, as well as the error. I know LMS or SVD is a good choice for sphere fitting.
I have tried sphereFit from the MATLAB File Exchange but got an error:
>> sphereFit(data)
Warning: Matrix is singular to working precision.
> In sphereFit at 33
ans =
NaN NaN NaN
Could you let me know where the problem is, or suggest any other solution?
If you want to use a sphere fitting algorithm you should first extract the boundary points of the object you assume to be a sphere. The result should be represented as an N-by-3 array containing the coordinates of the points. Then you can apply the sphereFit function.
In order to obtain the boundary points of a binary object, there are several methods. One method is to apply a morphological erosion (you need the "imerode" function from the Image Processing Toolbox) with a small structuring element, then compute the set difference between the two images, and finally use the "find" function to transform the binary image into a coordinate array.
The idea is as follows:
dataIn = imerode(data, ones([3 3 3])); % erode with a small structuring element
bnd = data & ~dataIn;                  % boundary = original minus eroded image
inds = find(bnd);                      % linear indices of the boundary voxels
[y, x, z] = ind2sub(size(data), inds); % be careful about x y order
points = [x y z];                      % N-by-3 coordinate array
sphere = sphereFit(points);
By the way, the link you gave refers to circle fitting; I suppose you wanted to point to a sphere fitting submission?
regards,
For an ellipsoid of the form
x^2/a^2 + y^2/b^2 + z^2/c^2 = 1
with orientation vector r and centre at point p, how can I find whether a point q is inside the ellipsoid or not?
An additional note: the geometry actually has a = b (a spheroid), and therefore one axis is sufficient to define the orientation.
Note: I saw a similar question asked in the forum, but it is about an ellipsoid at the origin and without any arbitrary orientation, whereas here both arbitrary position and orientation are considered.
Find the affine transform M that maps this ellipsoid to an axis-aligned one (translation by -p and a rotation aligning the orientation vector r with the proper coordinate axis).
Then apply this transform to the point q and check that q' lies inside the axis-aligned ellipsoid, i.e.
x^2/a^2+ y^2/b^2+z^2/c^2 <= 1
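A minimal MATLAB sketch of this approach, assuming r is a unit column vector not parallel to the z-axis and the semi-axis c lies along r (the parallel case would need separate handling):
v   = cross(r, [0; 0; 1]);               % rotation axis, r x z
s   = norm(v); cth = dot(r, [0; 0; 1]);  % sine and cosine of the rotation angle
V   = [0 -v(3) v(2); v(3) 0 -v(1); -v(2) v(1) 0];  % skew-symmetric matrix of v
Rm  = eye(3) + V + V*V*(1 - cth)/s^2;    % Rodrigues rotation taking r to z
qp  = Rm * (q(:) - p(:));                % query point in the aligned frame
inside = qp(1)^2/a^2 + qp(2)^2/b^2 + qp(3)^2/c^2 <= 1;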
Create a coordinate system E with its center at p and the long axis of the ellipsoid aligned with r. Create a matrix that can transform global coordinates into the coordinate system E. Then put the transformed coordinates into the ellipsoid equation.
A center point p and an "orientation vector" r do not suffice to completely specify the position of a general ellipsoid; there is one degree of freedom left (the rotation about r), so in general the problem is indeterminate. For your spheroid (a = b), however, that leftover rotation does not change the shape, so the test below is well defined.
If your vector r is a unit vector from the centre to the pole, then the test for whether a point q is in (or on) the ellipsoid is:
v = q-p; // 3d vector difference
dot = v.r; // 3d dot product
f = dot*dot;
g = v.v - f; // 3d dot product and scalar subtraction
return f/(b*b) + g/(a*a) <= 1
Note that if the ellipsoid were aligned so that r is the z unit vector, the test above reduces to the usual test for inclusion of a point in an ellipsoid.
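A runnable MATLAB version of that test (names follow the answer above: p is the centre, r the unit pole axis, a the equatorial semi-axis and b the polar one; the function name is just illustrative):
function inside = inSpheroid(q, p, r, a, b)
    v = q(:) - p(:);         % vector from the centre to the query point
    f = dot(v, r)^2;         % squared component along the pole axis
    g = dot(v, v) - f;       % squared component perpendicular to it
    inside = f/(b*b) + g/(a*a) <= 1;
end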
I found some code for image rotation in the frequency domain, but I couldn't understand it. The code works correctly. Can anyone describe it? Actually, I have to write code to rotate an image in the frequency domain in polar coordinates. Do you think this code meets the requirements?
clear;
img = imread('cameraman.tif');
imshow(img); title('original image');
theta = 26.5;                                 % rotation angle in degrees
N = size(img, 1);
M = size(img, 2);
fimg = fftshift(fft2(fftshift(double(img)))); % centred 2D FFT (fft2 needs floating point)
p = ones(N,1)*(-N/2 : N/2-1);                 % horizontal frequency axis
q = -p';                                      % vertical frequency axis
theta = 2*pi*theta/360;                       % degrees to radians
g = 1/(N^2).*fimg;                            % normalised spectrum
z1 = exp( 1i*pi/N.*((p.^2-q.^2)*cos(theta) - 2*p.*q*sin(theta))); % chirp factor
z2 = exp(-1i*pi/N.*((p.^2-q.^2)*cos(theta) - 2*p.*q*sin(theta))); % conjugate chirp
k = ifft2(fft2(g.*z1).*fft2(z2));             % chirp convolution via FFT
figure,
imshow(abs(fftshift(flipud(k))), [0 255]);
title(['Cameraman rotated at ' num2str(theta*360/(2*pi)) ' Degrees']); axis off
As you can see here.
The Fourier transform has no rotation property as such, but this code uses the shift property to apply the rotation.
In z1 you have the coordinates (p, q) computed as usual, but by applying the 2D rotation matrix to your coordinates you can use the shift property.
And notice that this code changes the sign everywhere; that is why there is no minus in z1 but there is one in z2.
Rotation : [cos(theta) -sin(theta);
sin(theta) cos(theta)];
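For reference, applying that rotation matrix to the coordinate grids p and q from the code above looks like this in MATLAB:
pr = p*cos(theta) - q*sin(theta);   % rotated horizontal coordinates
qr = p*sin(theta) + q*cos(theta);   % rotated vertical coordinates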