So I'm trying to transform image coordinates, given in the ordinary square x,y system, to circular (polar) coordinates, as shown below.
In order to do so, the center of the square image must become the origin (rho = 0) of the circular coordinate system.
In Matlab they have a function called 'cart2pol' where:
cart2pol(x,y)
However, the x,y arguments are the circular coordinates, so before using cart2pol, how do I convert the ordinary square coordinate system to a circular one?
I think you should be able to use cart2pol(x,y), which gives you the polar (2D) coordinates, or cylindrical (3D) coordinates if you also pass z, for Cartesian inputs x and y.
Coordinates in your 1st image: i, j. Coordinates in your 2nd image: theta, rho.
N = 400; % example: 400x400 pixels
% shift origin into center
% Matlab uses 1 to N indexing not 0 to N-1
xo = N/2; % center of image
yo = N/2;
% Define some sample points in the image (coords from top-left of image)
% 0 deg, 90 deg, 180 deg, 270 deg
i = [350 200 50 200];
j = [200 1 200 350];
% get polar coordinates from cartesian ones (theta in radians)
% -(j-yo) due to opposite direction of j to mathematical positive direction of coord.system
[theta, rho] = cart2pol(i-xo, -(j-yo));
rho = rho/(N/2); % scaling for unit circle
theta is in the range -pi to pi, so if you need 0 to 2*pi (or 0 to 360 degrees) you still need to apply a mapping.
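For example, a minimal Python/numpy sketch of that mapping (assuming theta is a numpy array of angles in radians; np.mod wraps the negative angles into the range 0 to 2*pi):
import numpy as np
theta = np.array([-np.pi/2, np.pi/4])   # hypothetical sample angles
theta_wrapped = np.mod(theta, 2*np.pi)  # -pi..pi mapped to 0..2*pi
theta_deg = np.degrees(theta_wrapped)   # optional: 0..360 degrees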
A similar question was asked before; unfortunately I cannot comment on Samgak's answer, so I am opening a new post. Here is the link to the old question:
How to calculate ray in real-world coordinate system from image using projection matrix?
My goal is to map from image coordinates to world coordinates. In fact I am trying to do this with the Camera Intrinsics Parameters of the HoloLens Camera.
Of course this mapping will only give me a ray connecting the Camera Optical Centre and all points, which can lie on that ray. For the mapping from image coordinates to world coordinates we can use the inverse camera matrix which is:
K^-1 = [1/fx 0 -cx/fx; 0 1/fy -cy/fy; 0 0 1]
Pcam = K^-1 * Ppix;
Pcam_x = Ppix_x/fx - cx/fx;
Pcam_y = Ppix_y/fy - cy/fy;
Pcam_z = 1;
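As a quick sanity check, here is a minimal numpy sketch of this inverse mapping (the fx, fy, cx, cy values below are made-up placeholders, not the actual HoloLens intrinsics):
import numpy as np
fx, fy, cx, cy = 500.0, 500.0, 320.0, 240.0  # hypothetical intrinsics
K = np.array([[fx, 0, cx],
              [0, fy, cy],
              [0,  0, 1.0]])
Ppix = np.array([400.0, 300.0, 1.0])  # homogeneous pixel coordinates
Pcam = np.linalg.inv(K) @ Ppix        # equals [Ppix_x/fx - cx/fx, Ppix_y/fy - cy/fy, 1]
print(Pcam)                           # -> [0.16, 0.12, 1.0]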
Orientation of Camera Coordinate System and Image Plane
In this specific case the image plane is probably at Z = -1 (however, I am a bit uncertain about this). The section Pixel to Application-specified Coordinate System on the HoloLens CameraProjectionTransform page describes how to go from pixel coordinates to world coordinates. As far as I understand, two signs in K^-1 are flipped such that we calculate the coordinates as follows:
Pcam_x = (Ppix_x/fx) - (cx*(-1)/fx) = Ppix_x/fx + cx/fx;
Pcam_y = (Ppix_y/fy) - (cy*(-1)/fy) = Ppix_y/fy + cy/fy;
Pcam_z = -1
Pcam = (Pcam_x, Pcam_y, -1)
CameraOpticalCentre = (0,0,0)
Ray = Pcam - CameraOpticalCentre
I do not understand how to create the Camera Intrinsics for the case of the image plane being at a negative Z-coordinate, and I would like a mathematical explanation or an intuitive understanding of why we have the sign flip (Ppix_x/fx + cx/fx instead of Ppix_x/fx - cx/fx).
Edit: I read in another post that the third column of the camera matrix has to be negated for the case that the camera is facing down the negative z-direction. This would explain the sign flip. However, why do we need to change the sign of the third column? I would like an intuitive understanding of this.
Here is the link to that post: Negation of third column
Thanks a lot in advance,
Lisa
why do we need to change the sign of the third column
To understand why we need to negate the third column of K (i.e. negate the principal point entries of the intrinsic matrix), let's first understand how to get the pixel coordinates of a 3D point that is already in the camera coordinate frame. After that, it is easier to understand why a negative-z convention requires negating things.
Let's imagine a camera and one point B in space (w.r.t. the camera coordinate frame), and let's put the camera sensor (i.e. the image) at E', as in the image below. Then f (in red) is the focal length and x' (in blue) is the x coordinate of B in pixels (measured from the center of the image). To simplify things, let's place B at the corner of the field of view (i.e. in the corner of the image).
We need to calculate the coordinates of B projected onto the sensor (which is the same as the 2D image). Because the triangles AEB and AE'B' are similar, x'/f = X/Z and therefore x' = X*f/Z. Multiplying by f is the first operation the K matrix performs; we can multiply K*B (with B as a column vector) to check.
This will give us coordinates in pixels w.r.t. the center of the image. Let's imagine the image is size 480x480. Therefore B' will look like this in the image below. Keep in mind that in image coordinates, the y-axis increases going down and the x-axis increases going right.
In images, the pixel at coordinates (0,0) is in the top-left corner, so we need to add half of the width of the image to the point we have: px = X*f/Z + cx, where cx is the principal point in the x-axis, usually W/2. Computing px = X*f/Z + cx is exactly what K*B followed by division by Z does. In our example X*f/Z was -240; if we add cx (W/2 = 480/2 = 240) we get X*f/Z + cx = 0, and the same for Y. The final pixel coordinates in the image are (0,0), i.e. the top-left corner.
Now, in the case where we use a negative z, dividing X and Y by Z changes their signs (because Z is negative), so the point is projected to B'' in the opposite quadrant, as in the image below.
Now the second image will instead be:
Because of this, instead of adding the principal point, we need to subtract it. That is the same as negating the last column of K.
So we have 240 - 240 = 0 (where the second 240 is the principal point in x, cx), and the same for Y. The pixel coordinates are (0,0), as in the example where z was positive. If we do not negate the last column we end up with (480,480) instead of (0,0).
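To make this concrete, here is a small numpy sketch of the example above (f = 240, a 480x480 image, and B placed at the corner of the field of view; the numbers come from the example, the code itself is just an illustration):
import numpy as np

def to_pixel(K, P):
    p = K @ P           # project with the intrinsic matrix
    return p[:2] / p[2] # homogeneous division

f, W = 240.0, 480.0
cx = cy = W / 2
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
B = np.array([-1.0, -1.0, 1.0])       # corner of the FOV, positive z
print(to_pixel(K, B))                 # -> [0. 0.], the top-left pixel
B_neg = np.array([-1.0, -1.0, -1.0])  # same ray, camera facing negative z
print(to_pixel(K, B_neg))             # -> [480. 480.], the wrong corner
K_neg = K.copy()
K_neg[:, 2] *= -1                     # negate the third column
print(to_pixel(K_neg, B_neg))         # -> [0. 0.] again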
Hope this helped a little bit
I need to find pixels laying on a circle centered on point (0,0). Right now I do this by using formulas:
x = round(r * cos(angle))
y = round(r * sin(angle))
and my angle takes values from 0 to 2 Pi.
However, this is not producing accurate results. For example, it gives the point (1,1) for both radius 1 and radius 2. How can I avoid this?
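For illustration, a quick Python check reproduces the collision described above (at 45 degrees both radii land on the same rounded pixel):
import math
angle = math.pi / 4
for r in (1, 2):
    x = round(r * math.cos(angle))
    y = round(r * math.sin(angle))
    print(r, (x, y))  # both r = 1 and r = 2 give (1, 1)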
What we have is the angle (0-360), our small object in the center, and the width and height of each object. I have tried dividing the widths and heights, without success. The task is to place the small object at the bottom of the big object, and when the big object gets rotated, the small object needs to be at the bottom again; but that "bottom" could be up, left, or right, which is why I guessed we need the angle.
So basically we need to move the small object along a circle. The radius of that circle will be the big object's height/2. But how do we calculate the X and Y locations from the center at which to place the small object?
Representation in images:
Here we have the default state with angle 0
Here we have angle 47
And here we have angle 227
Let's say you want to calculate your new coordinates r pixels away from the point (X,Y) at an angle a. If your new coordinates are (x1, y1), then:
x1 = X + r * cos(a)
y1 = Y + r * sin(a)
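A minimal Python sketch of this, assuming screen coordinates where y grows downwards and where angle 0 should place the small object at the bottom of the big one (the +90 degree offset encodes that assumption):
import math

def position_on_circle(cx, cy, r, angle_deg):
    a = math.radians(angle_deg + 90)  # shift so that 0 degrees points down (screen y grows downwards)
    return cx + r * math.cos(a), cy + r * math.sin(a)

x1, y1 = position_on_circle(100, 100, 50, 0)  # -> approximately (100.0, 150.0), directly below the center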
Here is some more info and techniques
I have some damaged line segments in a binary image and I need to fix them (make them straight and of their original thickness). In order to do that I have to find the middle points of the segments, so that when I check the neighborhood to find the thickness of the lines I'll be able to find where the pixels stop being 1 and become 0.
Assuming your damaged line segments are straight, you can use regionprops in MATLAB to find the center of each bounding box. If a segment is straight, it is always the diagonal of its bounding box, so the center of the box is also the center of the segment. A similar approach works in Python, as sketched below.
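A hedged Python counterpart using scikit-image's regionprops (the function name matches MATLAB's; img below is a made-up example image):
import numpy as np
from skimage.measure import label, regionprops

img = np.zeros((10, 10), dtype=np.uint8)
img[2:8, 2:8] = np.eye(6, dtype=np.uint8)  # a straight diagonal segment
for region in regionprops(label(img)):
    minr, minc, maxr, maxc = region.bbox   # bounding box (max indices are exclusive)
    center = ((minr + maxr - 1) / 2, (minc + maxc - 1) / 2)  # center of the box
    print(center)                          # -> (4.5, 4.5)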
Let's call the points A and B to reduce ambiguity, A(Xa, Ya) and B(Xb, Yb)
Let C be the middle point.
C(Xc, Yc)
Xc = (Xa + Xb) / 2
Yc = (Ya + Yb) / 2
We have four interesting numbers, two for the X coordinates and two for the Y coordinates.
Xmin = floor(Xc)
Xmax = ceil(Xc)
Ymin = floor(Yc)
Ymax = ceil(Yc)
The X coordinate of your middle point is either Xmin or Xmax, the Y coordinate of your middle point is either Ymin or Ymax.
So we have four potential points: (Xmin, Ymin), (Xmin, Ymax), (Xmax, Ymin), (Xmax, Ymax).
So, finally, we must decide which point is nearest to C.
Distance from P(Xp, Yp) to C(Xc, Yc) is:
sqrt((Xp - Xc)^2 + (Yp - Yc)^2)
Calculate the four distances from the four points to C and choose the minimum; that will be the best possible middle point.
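A compact Python sketch of this procedure:
import math

def best_middle_pixel(xa, ya, xb, yb):
    xc, yc = (xa + xb) / 2, (ya + yb) / 2  # analytical middle point C
    candidates = [(math.floor(xc), math.floor(yc)),
                  (math.floor(xc), math.ceil(yc)),
                  (math.ceil(xc), math.floor(yc)),
                  (math.ceil(xc), math.ceil(yc))]
    # choose the candidate nearest to C
    return min(candidates, key=lambda p: math.hypot(p[0] - xc, p[1] - yc))

print(best_middle_pixel(0, 0, 3, 5))  # C = (1.5, 2.5); prints one of the nearest pixels, e.g. (1, 2)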
Suppose
A = [xa ya];
B = [xb yb];
then
C = round( mean([A;B]) );
Matlab's round rounds numbers towards their nearest integer, so this minimizes the (city-block) distance from the analytical center (mean([A;B])) to the nearest pixel.
If you want to keep sub-pixel precision (which is actually advisable for most calculations until an explicit map from a result to pixel indices is required), just drop the round and use only the mean part.
I have the need to determine the bounding rectangle for a polygon at an arbitrary angle. This picture illustrates what I need to do:
(Image: http://kevlar.net/RotatedBoundingRectangle.png)
The pink rectangle is what I need to determine at various angles for simple 2d polygons.
Any solutions are much appreciated!
Edit:
Thanks for the answers, I got it working once I got the center points correct. You guys are awesome!
To get a bounding box with a certain angle, rotate the polygon the other way round by that angle. Then you can use the min/max x/y coordinates to get a simple bounding box and rotate that by the angle to get your final result.
From your comment it seems you have problems getting the center point of the polygon. The center of a polygon can be taken as the average of the coordinates of its points. So for points P1,...,PN, calculate:
xsum = p1.x + ... + pn.x;
ysum = p1.y + ... + pn.y;
xcenter = xsum / n;
ycenter = ysum / n;
To make this complete, I also add some formulas for the rotation involved. To rotate a point (x,y) around a center point (cx, cy), do the following:
// Translate center to (0,0)
xt = x - cx;
yt = y - cy;
// Rotate by angle alpha (make sure to convert alpha to radians if needed)
xr = xt * cos(alpha) - yt * sin(alpha);
yr = xt * sin(alpha) + yt * cos(alpha);
// Translate back to (cx, cy)
result.x = xr + cx;
result.y = yr + cy;
To get the smallest rectangle you need the right angle. This can be accomplished with an algorithm used in collision detection: oriented bounding boxes.
The basic steps:
Get all vertex coordinates
Build a covariance matrix
Find the eigenvectors of that matrix
Project all the vertices into the eigenvector space
Find the max and min in each eigenvector direction.
For more information just google OBB "collision detection"; a numpy sketch of these steps follows below.
PS: If you just project all the vertices onto the coordinate axes and find the maximum and minimum, you're making an AABB (axis-aligned bounding box). It's easier and requires less computational effort, but doesn't guarantee the minimum box.
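A minimal numpy sketch of these steps (a rough PCA-style OBB; like the list above, it does not guarantee the truly minimal box either):
import numpy as np

def oriented_bounding_box(points):
    # points: (N, 2) array of vertex coordinates
    mean = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)  # 2x2 covariance matrix of the vertices
    _, eigvecs = np.linalg.eigh(cov)    # columns are the box axes
    proj = (points - mean) @ eigvecs    # project vertices onto the axes
    mins, maxs = proj.min(axis=0), proj.max(axis=0)
    corners = np.array([[mins[0], mins[1]],
                        [mins[0], maxs[1]],
                        [maxs[0], maxs[1]],
                        [maxs[0], mins[1]]])
    return corners @ eigvecs.T + mean   # corners back in the original frame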
I'm interpreting your question to mean "For a given 2D polygon, how do you calculate the position of a bounding rectangle for which the angle of orientation is predetermined?"
And I would do it by rotating the polygon against the angle of orientation, then using a simple search for its maximum and minimum points in the two cardinal directions, with whatever search algorithm is appropriate for the structure the polygon's points are stored in. (Simply put, you need to find the highest and lowest X values, and the highest and lowest Y values.)
Then the minima and maxima define your rectangle.
You can do the same thing without rotating the polygon first, but your search for minimum and maximum points has to be more sophisticated.
To get a rectangle with minimal area enclosing a polygon, you can use a rotating calipers algorithm.
The key insight is that any such minimal rectangle is collinear with at least one edge of (the convex hull of) the polygon. (This is not the case in your sample image, so I assume you don't actually require minimal area?)
Here is a Python implementation of the answer by @schnaader.
Given a point set with coordinates x and y and the angle (in degrees) of the rectangle that should bound those points, the function returns a point set with the four corners (plus a repetition of the first corner).
import numpy as np

def BoundingRectangleAnglePoints(x, y, alphadeg):
    # convert the angle to radians
    alpha = np.radians(alphadeg)
    # calculate the center of the point set
    cx = np.mean(x)
    cy = np.mean(y)
    # translate the center to (0,0)
    xt = x - cx
    yt = y - cy
    # rotate by angle alpha
    xr = xt * np.cos(alpha) - yt * np.sin(alpha)
    yr = xt * np.sin(alpha) + yt * np.cos(alpha)
    # find the min and max in rotated space
    minx_r = np.min(xr)
    miny_r = np.min(yr)
    maxx_r = np.max(xr)
    maxy_r = np.max(yr)
    # set up the corner points of the bounding rectangle (closed polygon)
    xbound_r = np.asarray([minx_r, minx_r, maxx_r, maxx_r, minx_r])
    ybound_r = np.asarray([miny_r, maxy_r, maxy_r, miny_r, miny_r])
    # rotate back by -alpha and translate back to (cx, cy)
    xbound = (xbound_r * np.cos(-alpha) - ybound_r * np.sin(-alpha)) + cx
    ybound = (xbound_r * np.sin(-alpha) + ybound_r * np.cos(-alpha)) + cy
    return xbound, ybound
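A quick usage example with made-up sample points:
x = np.array([0.0, 2.0, 2.0, 0.0])
y = np.array([0.0, 0.0, 1.0, 1.0])
xbound, ybound = BoundingRectangleAnglePoints(x, y, 30)
print(list(zip(xbound, ybound)))  # five corner points; the first and last are identical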