Scaling vectors from a center point? - algorithm

I'm trying to figure out if I have points that make for example a square:
* *
* *
and let's say I know the center of this square.
I want a formula that will make it, for example, twice its size, but scaled from the center:
* *
* *
* *
* *
The new shape is therefore twice as large and still centered on the polygon's center. It has to work for any shape, not just squares.
I'm looking more for the theory behind it than the implementation.

If you know the center point cp and a point v in the polygon you would like to scale by scale, then:
v2 = v - cp; // get a vector to v relative to the centerpoint
v2_scaled = v2 * scale; // scale the cp-relative-vector
v1_scaled = v2_scaled + cp; // translate the scaled vector back
This translate-scale-translate pattern can be performed on vectors of any dimension.
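A minimal sketch of that pattern in Python with NumPy (the example polygon and the use of the vertex average as the center are my assumptions):

import numpy as np

def scale_polygon(points, center, scale):
    # translate to the origin, scale, translate back
    return (points - center) * scale + center

square = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], dtype=float)
center = square.mean(axis=0)                # (1, 1)
doubled = scale_polygon(square, center, 2)  # [[-1,-1], [3,-1], [3,3], [-1,3]]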

If you want the shape to have twice the area, scale the distance of each coordinate from the center by sqrt(2).
In other words, let's say your point is at (x, y) and the center is (xcent, ycent). Your new point should be at
(xcent + sqrt(2)*(x - xcent), ycent + sqrt(2)*(y - ycent))
This will scale the distances from the new 'origin', (xcent, ycent) in such a way that the area doubles. (Because sqrt(2)*sqrt(2) == 2).

I'm not sure there's a clean way to do this for all types of objects. For relatively simple ones, you should be able to find the "center" as the average of all the X and Y values of the individual points. To double the size, you find the length and angle of a vector from the center to the point. Double the length of the vector, and retain the same angle to get the new point.
Edit: of course, "twice the size" is open to several interpretations (e.g., doubling the perimeter vs. doubling the area). These would change the multiplier used above, but the basic algorithm would remain essentially the same.

To do what you want you need to perform three operations: translate the square so that its centroid coincides with the origin of the coordinate system, scale the resulting square, translate it back.

Related

Can I calculate a transformation matrix given a set of points?

I'm trying to deduce the 2D transformation parameters from the result.
Given is a large number of samples in an unknown X-Y-coordinate system as well as their respective counterparts in WGS84 (longitude, latitude). Since the area is small, we can assume the target system to be flat, too.
Sadly I don't know which order of scale, rotate, translate was used, and I'm not even sure if there were 1 or 2 translations.
I tried to create a lengthy equation system, but that ended up too complex for me to handle. Basic geometry also failed me, as the order of transformations is unknown and I would have to check every possible combination order.
Is there a systematic approach to this problem?
Figuring out the scaling factor is easy: just choose any two points, find the distance between them in your X-Y space and in your WGS84 space, and the ratio of the two is your scaling factor.
The rotations and translations are a little trickier, but not nearly as difficult once you learn that the result of applying any number of rotations and translations (in 2 dimensions only!) can be reduced to a single rotation about some unknown point by some unknown angle.
Suddenly you have N points to determine 3 unknowns: the axis of rotation (x and y coordinates) and the angle of rotation.
Calculating the rotation looks like this:
Pr = R*(Pxy - Paxis_xy) + Paxis_xy
Pr is your rotated point in X-Y space which then needs to be converted to WGS84 space (if the axes of your coordinate systems are different).
R is the familiar rotation matrix depending on your rotation angle.
Pxy is your unrotated point in X-Y space.
Paxis_xy is the axis of rotation in X-Y space.
To actually find the 3 unknowns, you need to un-scale your WGS84 points (or equivalently scale your X-Y points) by the scaling factor you found and shift your points so that the two coordinate systems have the same origin.
First, finding the angle of rotation: take two corresponding pairs of points P1, P1' and P2, P2' and write out
P1' = R(P1-A) + A
P2' = R(P2-A) + A
where I swapped A = Paxis_xy for brevity. Subtracting the two equations gives:
P2'-P1' = R(P2-P1)
B = R * C (writing B = P2'-P1' and C = P2-P1)
Bx = cos(a) * Cx - sin(a) * Cy
By = sin(a) * Cx + cos(a) * Cy
Treating this as a linear system in cos(a) and sin(a) and solving (note Cx^2 + Cy^2 = |C|^2):
cos(a) = (Bx * Cx + By * Cy) / (Cx^2 + Cy^2)
sin(a) = (By * Cx - Bx * Cy) / (Cx^2 + Cy^2)
a = atan2(sin(a), cos(a)) <-- to get the right quadrant
And you have your angle. You can also do a quick check that cos(a) * cos(a) + sin(a) * sin(a) == 1 to make sure either that you got all the calculations correct or that your system really is an orientation-preserving isometry (consists only of translations and rotations).
Now that we know a, we know R, and so to find A we do:
P1' = R(P1-A) + A
P1' - R*P1 = (I-R)A
A = (inverse(I-R)) * (P1' - R*P1)
where the inversion of a 2x2 matrix is easy.
EDIT: There is an error in the above, or more specifically one case that needs to be treated separately.
There is one combination of translations and rotations that does not reduce to a single rotation, and that is a single translation. You can think of it in terms of fixed points (how many points are unchanged by the operation).
A translation has no fixed points (all points are changed) and a rotation has 1 fixed point (the axis doesn't change). It turns out that two rotations leave 1 fixed point, and a translation combined with a rotation leaves 1 fixed point, which (with a little proof that says the number of fixed points tells you the operation performed) is the reason that arbitrary combinations of these result in a single rotation.
What this means for you is that if your angle comes out as 0 then using the method above will give you A = 0 as well, which is likely incorrect. In this case you have to do A = P1' - P1.
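A NumPy sketch of the whole recovery (names are mine; assumes the scale has already been divided out as described above):

import numpy as np

def recover_rotation(p1, p1p, p2, p2p):
    # B = P2' - P1', C = P2 - P1; solve B = R*C for the angle
    B = p2p - p1p
    C = p2 - p1
    d = C[0]**2 + C[1]**2
    cos_a = (B[0]*C[0] + B[1]*C[1]) / d
    sin_a = (B[1]*C[0] - B[0]*C[1]) / d
    a = np.arctan2(sin_a, cos_a)
    if abs(a) < 1e-12:
        # pure translation: no fixed point, A would come out wrong
        return a, None, p1p - p1
    R = np.array([[cos_a, -sin_a], [sin_a, cos_a]])
    A = np.linalg.solve(np.eye(2) - R, p1p - R @ p1)  # (I-R)A = P1' - R*P1
    return a, A, None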
If I understood the question correctly, you have n points (X1,Y1),...,(Xn,Yn), the corresponding points, say, (x1,y1),...,(xn,yn) in another coordinate system, and the former are supposedly obtained from the latter by rotation, scaling and translation.
Note that this data does not determine the fixed point of rotation / scaling, or the order in which the operations "should" be applied. On the other hand, if you know these beforehand or choose them arbitrarily, you will find a rotation, translation and scaling factor that transform the data as intended.
For example, you can pick any point, say, p0 = [X1, Y1]^T (column vector) as the fixed point of rotation & scaling and subtract its coordinates from those of two other points to get p2 = [X2-X1, Y2-Y1]^T and p3 = [X3-X1, Y3-Y1]^T. Also take the column vectors q2 = [x2-x1, y2-y1]^T, q3 = [x3-x1, y3-y1]^T. Now [p2 p3] = A*[q2 q3], where A is an unknown 2x2 matrix representing the roto-scaling. You can solve it (unless you were unlucky and chose degenerate points) as A = [p2 p3] * [q2 q3]^-1, where ^-1 denotes the matrix inverse (of the 2x2 matrix [q2 q3]). Now, if the transformation between the coordinate systems really is a roto-scaling-translation, all the points should satisfy Pk = A * (Qk - q0) + p0, where Pk = [Xk, Yk]^T, Qk = [xk, yk]^T, q0 = [x1, y1]^T, and k = 1,...,n.
If you want, you can quite easily determine the scaling and rotation parameters from the components of A, or combine b = -A * q0 + p0 to get Pk = A*Qk + b.
The above method does not react well to noise or choosing degenerate points. If necessary, this can be fixed by applying, e.g., Principal Component Analysis, which is also just a few lines of code if MATLAB or some other linear algebra tools are available.
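A NumPy sketch of the basic construction, without the PCA robustification (point ordering and function name are mine):

import numpy as np

def fit_rotoscaling(P, Q):
    # P, Q: (n, 2) arrays of corresponding points, n >= 3, non-degenerate
    p0, q0 = P[0], Q[0]
    Mp = np.column_stack([P[1] - p0, P[2] - p0])  # [p2 p3]
    Mq = np.column_stack([Q[1] - q0, Q[2] - q0])  # [q2 q3]
    A = Mp @ np.linalg.inv(Mq)
    b = p0 - A @ q0        # so that Pk = A*Qk + b
    return A, b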

How to place svg shapes in a circle?

I'm playing a bit with D3.js and I got most things working. But I want to place my svg shapes in a circle, so I can show the difference in data with color and text. I know how to draw circles and pie charts, but I want to basically have a circle of same-size circles, and not have them overlap; the order is irrelevant. I don't know where to start to find the x & y for each circle.
If I understand you correctly, this is a fairly standard math question:
Simply loop over some angle variable in the appropriate step size and use sin() and cos() to calculate your x and y values.
For example:
Let's say you are trying to place 3 objects. There are 360 degrees in a circle. So each object is 120 degrees away from the next. If your objects are 20x20 pixels in size, place them at the following locations:
x1 = sin( 0 * pi()/180) * r + xc - 10; y1 = cos( 0 * pi()/180) * r + yc - 10
x2 = sin(120 * pi()/180) * r + xc - 10; y2 = cos(120 * pi()/180) * r + yc - 10
x3 = sin(240 * pi()/180) * r + xc - 10; y3 = cos(240 * pi()/180) * r + yc - 10
Here, r is the radius of the circle and (xc, yc) are the coordinates of the circle's center point. The -10's make sure that the objects have their center (rather than their top left corner) on the circle. The * pi()/180 converts the degrees to radians, which is the unit most implementations of sin() and cos() require.
Note: This places the shapes equally distributed around the circle. To make sure they don't overlap, you have to pick your r big enough. If the objects have simple and identical boundaries, just lay out 10 of them, figure out the radius you need, and then, if you need to place 20, make the radius twice as big, for 30 three times as big and so forth. If the objects are irregularly shaped and you want to place them in the optimal order around the circle to find the smallest circle possible, this problem gets extremely messy. Maybe there's a library for this, but I don't have one off the top of my head, and since I haven't used D3.js, I'm not sure whether it provides this functionality either.
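As a sketch of the equal-spacing math in Python (cx, cy, r and the 20x20 object size are taken from the example above):

import math

def circle_positions(n, cx, cy, r, obj_size=20):
    half = obj_size / 2
    positions = []
    for i in range(n):
        a = 2 * math.pi * i / n            # equally spaced angles
        x = math.sin(a) * r + cx - half    # center each object on the circle
        y = math.cos(a) * r + cy - half
        positions.append((x, y))
    return positions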
Here's another approach to this, for shapes of arbitrary size, using D3's tree layout: http://jsfiddle.net/nrabinowitz/5CfGG/
The tree layout (docs, example) will figure out the x,y placement of each item for you, based on a given radius and a function returning the separation between the centers of any two items. In this example, I used circles of varying sizes, so the separation between them is a function of their radii:
var tree = d3.layout.tree()
    .size([360, radius])
    .separation(function(a, b) {
        return radiusScale(a.size) + radiusScale(b.size);
    });
Using the D3 tree layout solves the first problem, laying out the items in a circle. The second problem, as @Markus notes, is how to calculate the right radius for the circle. I've taken a slightly rough approach here, for the sake of expediency: I estimate the circumference of the circle as the sum of the diameters of the various items, with a given padding in between, then calculate the radius from the circumference:
var roughCircumference = d3.sum(data.map(radiusScale)) * 2 +
        padding * (data.length - 1),
    radius = roughCircumference / (Math.PI * 2);
The circumference here isn't exact, and this will be less and less accurate the fewer items you have in the circle, but it's close enough for this purpose.

Finding translation and scale on two sets of points to get least square error in their distance?

I have two sets of 3D points (original and reconstructed) and correspondence information about the pairs - which point from one set corresponds to which in the other. I need to find the 3D translation and scaling factor which transforms the reconstructed set so that the sum of squared distances is least (rotation would be nice too, but the points are rotated similarly, so this is not the main priority and might be omitted for the sake of simplicity and speed). And so my question is: is this solved and available somewhere on the Internet? Personally, I would use the least squares method, but I don't have much time (and although I'm somewhat good at math, I don't use it often, so it would be better for me to avoid it), so I would like to use someone else's solution if it exists. I prefer a solution in C++, for example using OpenCV, but the algorithm alone is good enough.
If there is no such solution, I will calculate it by myself, I don't want to bother you so much.
SOLUTION: (from your answers)
For me it's the Kabsch algorithm.
Base info: http://en.wikipedia.org/wiki/Kabsch_algorithm
General solution: http://nghiaho.com/?page_id=671
STILL NOT SOLVED:
I also need scale. The scale values from SVD are not understandable to me; when I expect a scale of about 1-4 for all axes (estimated by me), the SVD scale is about [2000, 200, 20], which is not helping at all.
Since you are already using the Kabsch algorithm, just have a look at Umeyama's paper, which extends it to get the scale. All you need to do is get the variance sigma^2 of your points and calculate the scale as:
(1/sigma^2)*trace(D*S)
where D is the diagonal matrix in the SVD decomposition in the rotation estimation and S is either the identity matrix or the diag(1, 1, -1) matrix, depending on the sign of the determinant of UV (which Kabsch uses to correct reflections into proper rotations). So if you have [2000, 200, 20], multiply the last element by +-1 (depending on the sign of the determinant of UV), sum them, and divide by the variance sigma^2 of your points to get the scale.
You can recycle the following code, which is using the Eigen library:
typedef Eigen::Matrix<double, 3, 1, Eigen::DontAlign> Vector3d_U; // microsoft's 32-bit compiler can't put Eigen::Vector3d inside a std::vector. for other compilers or for 64-bit, feel free to replace this by Eigen::Vector3d
/**
* #brief rigidly aligns two sets of poses
*
* This calculates such a relative pose <tt>R, t</tt>, such that:
*
* #code
* _TyVector v_pose = R * r_vertices[i] + t;
* double f_error = (r_tar_vertices[i] - v_pose).squaredNorm();
* #endcode
*
* The sum of squared errors in <tt>f_error</tt> for each <tt>i</tt> is minimized.
*
* #param[in] r_vertices is a set of vertices to be aligned
* #param[in] r_tar_vertices is a set of vertices to align to
*
* #return Returns a relative pose that rigidly aligns the two given sets of poses.
*
* #note This requires the two sets of poses to have the corresponding vertices stored under the same index.
*/
static std::pair<Eigen::Matrix3d, Eigen::Vector3d> t_Align_Points(
const std::vector<Vector3d_U> &r_vertices, const std::vector<Vector3d_U> &r_tar_vertices)
{
_ASSERTE(r_tar_vertices.size() == r_vertices.size());
const size_t n = r_vertices.size();
Eigen::Vector3d v_center_tar3 = Eigen::Vector3d::Zero(), v_center3 = Eigen::Vector3d::Zero();
for(size_t i = 0; i < n; ++ i) {
v_center_tar3 += r_tar_vertices[i];
v_center3 += r_vertices[i];
}
v_center_tar3 /= double(n);
v_center3 /= double(n);
// calculate centers of positions, potentially extend to 3D
double f_sd2_tar = 0, f_sd2 = 0; // only one of those is really needed
Eigen::Matrix3d t_cov = Eigen::Matrix3d::Zero();
for(size_t i = 0; i < n; ++ i) {
Eigen::Vector3d v_vert_i_tar = r_tar_vertices[i] - v_center_tar3;
Eigen::Vector3d v_vert_i = r_vertices[i] - v_center3;
// get both vertices
f_sd2 += v_vert_i.squaredNorm();
f_sd2_tar += v_vert_i_tar.squaredNorm();
// accumulate squared standard deviation (only one of those is really needed)
t_cov.noalias() += v_vert_i * v_vert_i_tar.transpose();
// accumulate covariance
}
// calculate the covariance matrix
Eigen::JacobiSVD<Eigen::Matrix3d> svd(t_cov, Eigen::ComputeFullU | Eigen::ComputeFullV);
// calculate the SVD
Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
// compute the rotation
double f_det = R.determinant();
Eigen::Vector3d e(1, 1, (f_det < 0)? -1 : 1);
// calculate determinant of V*U^T to disambiguate rotation sign
if(f_det < 0)
R.noalias() = svd.matrixV() * e.asDiagonal() * svd.matrixU().transpose();
// recompute the rotation part if the determinant was negative
R = Eigen::Quaterniond(R).normalized().toRotationMatrix();
// renormalize the rotation (not needed but gives slightly more orthogonal transformations)
double f_scale = svd.singularValues().dot(e) / f_sd2_tar;
double f_inv_scale = svd.singularValues().dot(e) / f_sd2; // only one of those is needed
// calculate the scale
R *= f_inv_scale;
// apply scale
Eigen::Vector3d t = v_center_tar3 - (R * v_center3); // R needs to contain scale here, otherwise the translation is wrong
// want to align center with ground truth
return std::make_pair(R, t); // or put it in a single 4x4 matrix if you like
}
For 3D points the problem is known as the Absolute Orientation problem. A C++ implementation is available in Eigen http://eigen.tuxfamily.org/dox/group__Geometry__Module.html#gab3f5a82a24490b936f8694cf8fef8e60 and the paper http://web.stanford.edu/class/cs273/refs/umeyama.pdf
You can use it via OpenCV by converting the matrices to Eigen with cv::cv2eigen() calls.
Start by translating both sets of points so that their centroids coincide with the origin of the coordinate system. The translation vector is just the difference between these centroids.
Now we have two sets of coordinates represented as matrices P and Q. One set of points may be obtained from other one by applying some linear operator (which performs both scaling and rotation). This operator is represented by 3x3 matrix X:
P * X = Q
To find proper scale/rotation we just need to solve this matrix equation, find X, then decompose it into several matrices, each representing some scaling or rotation.
A simple (but probably not numerically stable) way to solve it is to multiply both sides of the equation by the transposed matrix P^T (to get rid of the non-square matrices), then multiply both sides by the inverse of P^T * P:
P^T * P * X = P^T * Q
X = (P^T * P)^-1 * P^T * Q
Applying singular value decomposition (SVD) to the matrix X gives two orthogonal matrices and a matrix with scale factors:
X = U * S * V^T
Here S is a diagonal matrix with scale factors (one scale for each coordinate); U and V are orthogonal matrices: one properly rotates the points so that they may be scaled along the coordinate axes, the other rotates them once more to align their orientation with the second set of points.
Example (2D points are used for simplicity):
P =  1  2       Q =   7.5391   4.3455
     2  3            12.9796   5.8897
    -2  1            -4.5847   5.3159
    -1 -6           -15.9340 -15.5511
After solving the equation:
X =  3.3417 -1.2573
     2.0987  2.8014
After SVD decomposition:
U = -0.7317 -0.6816
    -0.6816  0.7317
S =  4  0
     0  3
V = -0.9689 -0.2474
    -0.2474  0.9689
Here SVD has properly reconstructed all manipulations I performed on matrix P to get matrix Q: rotate by the angle 0.75, scale X axis by 4, scale Y axis by 3, rotate by the angle -0.25.
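For reference, a NumPy sketch reproducing this procedure on the example above (np.linalg.lstsq stands in for the explicit normal-equation solve, which is the numerically safer choice):

import numpy as np

P = np.array([[1, 2], [2, 3], [-2, 1], [-1, -6]], dtype=float)
Q = np.array([[7.5391, 4.3455], [12.9796, 5.8897],
              [-4.5847, 5.3159], [-15.9340, -15.5511]])

X, *_ = np.linalg.lstsq(P, Q, rcond=None)  # least-squares solve of P * X = Q
U, S, Vt = np.linalg.svd(X)                # X = U * diag(S) * Vt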
If sets of points are scaled uniformly (scale factor is equal by each axis), this procedure may be significantly simplified.
Just use the Kabsch algorithm to get the translation/rotation values. Then apply that translation and rotation (so the centroids coincide with the origin of the coordinate system). Then, for each pair of points (and for each coordinate), estimate a linear regression. The linear regression coefficient is exactly the scale factor.
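A minimal sketch of that regression step (function name is mine; assumes both point arrays are already centered and rotation-aligned):

import numpy as np

def uniform_scale(src, dst):
    # slope of a least-squares line through the origin: sum(x*y) / sum(x*x)
    return (src * dst).sum() / (src ** 2).sum()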
A good explanation: Finding optimal rotation and translation between corresponding 3D points
The code there is in MATLAB but it's trivial to convert to OpenCV using the cv::SVD function.
You might want to try ICP (Iterative closest point).
Given two sets of 3d points, it will tell you the transformation (rotation + translation) to go from the first set to the second one.
If you're interested in a c++ lightweight implementation, try libicp.
Good luck!
The general transformation, as well as the scale, can be retrieved via Procrustes Analysis. It works by superimposing the objects on top of each other and estimating the transformation from that setting. It has been used in the context of ICP many times. In fact, your preference, the Kabsch algorithm, is a special case of this.
Moreover, Horn's alignment algorithm (based on quaternions) also finds a very good solution, while being quite efficient. A Matlab implementation is also available.
Scale can be inferred without SVD if your points are uniformly scaled in all directions (I could not make sense of the SVD scale matrix either). Here is how I solved the same problem:
Measure distances of each point to other points in the point cloud to get a 2d table of distances, where entry at (i,j) is norm(point_i-point_j). Do the same thing for the other point cloud, so you get two tables -- one for original and the other for reconstructed points.
Divide all values in one table by the corresponding values in the other table. Because the points correspond to each other, the distances do too. Ideally, the resulting table has all values being equal to each other, and this is the scale.
The median value of the divisions should be pretty close to the scale you are looking for. The mean value is also close, but I chose median just to exclude outliers.
Now you can use the scale value to scale all the reconstructed points and then proceed to estimating the rotation.
Tip: If there are too many points in the point clouds to find distances between all of them, then a smaller subset of distances will work, too, as long as it is the same subset for both point clouds. Ideally, just one distance pair would work if there is no measurement noise, e.g when one point cloud is directly derived from the other by just rotating it.
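A compact NumPy/SciPy version of this idea (function name is mine; scipy.spatial.distance.pdist enumerates the pairwise distances in the same order for both clouds):

import numpy as np
from scipy.spatial.distance import pdist

def scale_from_distances(original, reconstructed):
    # ratio of corresponding pairwise distances; the median rejects outliers
    return np.median(pdist(original) / pdist(reconstructed))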
You can also use the ScaleRatio ICP proposed by BaoweiLin.
The code can be found on GitHub.

Convert OpenGL 4x4 matrix to rotation angles

I am extracting in OpenGL the Model Matrix with
glGetFloatv (GL_MODELVIEW_MATRIX, (float*)x)
And would like to extract from the resulting 4x4 matrix the x, y and z axis rotations. How can I do that?
Thanks!
First you should know that x, y, z axis rotations, called Euler angles, suffer from serious numerical problems. Also, they're not unambiguous. So either you store a rotation angle and the rotation axis, thus effectively forming a quaternion in disguise, or you stick with the full rotation matrix.
Finding the axis-angle (or quaternion) form from a rotation matrix is an eigenvalue problem: technically you're determining the eigenvector of the rotation matrix with eigenvalue 1, which is the rotation axis, and the angle can then be recovered from the trace of the matrix.
I'm writing a CAD-like app, so I understand your problem, we 'in the business' know how awful Euler angles are for linear transformations - but the end-user finds them far more intuitive than matrices or quaternions.
For my app I interpreted Ken Shoemake's wonderful algorithm, it's one of the very few that support arbitrary rotation orders. It's from '93, so it's in pure C code - not for the faint hearted!
http://tog.acm.org/resources/GraphicsGems/gemsiv/euler_angle/
Something like this should give you what you're after.
final double roll = Math.atan2(2 * (quat.getW() * quat.getX() + quat.getY() * quat.getZ()),
        1 - 2 * (quat.getX() * quat.getX() + quat.getY() * quat.getY()));
final double pitch = Math.asin(2 * (quat.getW() * quat.getY() - quat.getZ() * quat.getX()));
final double yaw = Math.atan2(2 * (quat.getW() * quat.getZ() + quat.getX() * quat.getY()),
        1 - 2 * (quat.getY() * quat.getY() + quat.getZ() * quat.getZ()));
I use this as a utility function to print out camera angles when I'm using SLERP to interpolate between 2 quaternions that I've derived from 2 4x4 matrices (i.e. camera movement between 2 3D points).
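For comparison, a NumPy sketch that reads one common yaw/pitch/roll convention (Rz*Ry*Rx) straight out of the column-major 4x4 modelview matrix; it assumes the matrix contains no scale, and the convention may differ from the one your code expects:

import numpy as np

def euler_from_modelview(m):
    # m: the 16 floats from glGetFloatv, column-major
    R = np.asarray(m, dtype=float).reshape(4, 4).T[:3, :3]
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return roll, pitch, yaw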

Calculating the Bounding Rectangle at an Angle of a Polygon

I have the need to determine the bounding rectangle for a polygon at an arbitrary angle. This picture illustrates what I need to do:
(Illustration: http://kevlar.net/RotatedBoundingRectangle.png)
The pink rectangle is what I need to determine at various angles for simple 2d polygons.
Any solutions are much appreciated!
Edit:
Thanks for the answers, I got it working once I got the center points correct. You guys are awesome!
To get a bounding box with a certain angle, rotate the polygon the other way round by that angle. Then you can use the min/max x/y coordinates to get a simple bounding box and rotate that by the angle to get your final result.
From your comment it seems you have problems getting the center point of the polygon. The center of the polygon is the average of the coordinates of its points. So for points P1,...,PN, calculate:
xsum = p1.x + ... + pn.x;
ysum = p1.y + ... + pn.y;
xcenter = xsum / n;
ycenter = ysum / n;
To make this complete, I also add some formulas for the rotation involved. To rotate a point (x,y) around a center point (cx, cy), do the following:
// Translate center to (0,0)
xt = x - cx;
yt = y - cy;
// Rotate by angle alpha (make sure to convert alpha to radians if needed)
xr = xt * cos(alpha) - yt * sin(alpha);
yr = xt * sin(alpha) + yt * cos(alpha);
// Translate back to (cx, cy)
result.x = xr + cx;
result.y = yr + cy;
To get the smallest rectangle you need to find the right angle. This can be accomplished with an algorithm used in collision detection: oriented bounding boxes (OBB).
The basic steps (a sketch follows below):
Get all vertex coordinates
Build a covariance matrix
Find its eigenvectors
Project all the vertices onto the eigenvectors
Find the max and min along each eigenvector
For more information just google OBB "collision detection"
Ps: If you just project all vertices onto the coordinate axes and find the maximum and minimum, you get an AABB (axis-aligned bounding box). It's easier and requires less computational effort, but doesn't guarantee the minimum box.
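A NumPy sketch of those steps (as noted, this gives a PCA-aligned box, not necessarily the minimal one):

import numpy as np

def pca_obb(points):
    # points: (n, 2) array of vertex coordinates
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)            # covariance matrix of the vertices
    _, eigvecs = np.linalg.eigh(cov)    # eigenvectors of the covariance
    proj = centered @ eigvecs           # project vertices onto the eigenvectors
    return proj.min(axis=0), proj.max(axis=0), eigvecs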
I'm interpreting your question to mean "For a given 2D polygon, how do you calculate the position of a bounding rectangle for which the angle of orientation is predetermined?"
And I would do it by rotating the polygon by the negative of the angle of orientation, then using a simple search for its maximum and minimum points in the two cardinal directions, using whatever search algorithm is appropriate for the structure the points of the polygon are stored in. (Simply put, you need to find the highest and lowest X values, and the highest and lowest Y values.)
Then the minima and maxima define your rectangle.
You can do the same thing without rotating the polygon first, but your search for minimum and maximum points has to be more sophisticated.
To get a rectangle with minimal area enclosing a polygon, you can use a rotating calipers algorithm.
The key insight is that (unlike in your sample image, so I assume you don't actually require minimal area?), any such minimal rectangle is collinear with at least one edge of (the convex hull of) the polygon.
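A brute-force sketch of that insight (each hull edge is tried as the rectangle orientation, rather than true rotating calipers; scipy.spatial.ConvexHull is assumed to be available):

import numpy as np
from scipy.spatial import ConvexHull

def min_area_rect_angle(points):
    hull = points[ConvexHull(points).vertices]   # convex hull, counter-clockwise
    best_area, best_angle = np.inf, 0.0
    for i in range(len(hull)):
        edge = hull[(i + 1) % len(hull)] - hull[i]
        a = np.arctan2(edge[1], edge[0])
        c, s = np.cos(-a), np.sin(-a)
        R = np.array([[c, -s], [s, c]])
        r = hull @ R.T                           # rotate so the edge is axis-aligned
        area = np.ptp(r[:, 0]) * np.ptp(r[:, 1])
        if area < best_area:
            best_area, best_angle = area, a
    return best_angle    # orientation of the minimal-area rectangle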
Here is a Python implementation of the answer by @schnaader.
Given a point set with coordinates x and y and the angle (in degrees) of the rectangle to bound those points, the function returns a point set with the four corners (and a repetition of the first corner).
import numpy as np

def BoundingRectangleAnglePoints(x, y, alphadeg):
    # convert to radians
    alpha = np.radians(alphadeg)
    # calculate center
    cx = np.mean(x)
    cy = np.mean(y)
    # translate center to (0,0)
    xt = x - cx
    yt = y - cy
    # rotate by angle alpha
    xr = xt * np.cos(alpha) - yt * np.sin(alpha)
    yr = xt * np.sin(alpha) + yt * np.cos(alpha)
    # find the min and max in rotated space
    minx_r = np.min(xr)
    miny_r = np.min(yr)
    maxx_r = np.max(xr)
    maxy_r = np.max(yr)
    # set up the minimum and maximum points of the bounding rectangle
    xbound_r = np.asarray([minx_r, minx_r, maxx_r, maxx_r, minx_r])
    ybound_r = np.asarray([miny_r, maxy_r, maxy_r, miny_r, miny_r])
    # rotate back and translate back to (cx, cy)
    xbound = (xbound_r * np.cos(-alpha) - ybound_r * np.sin(-alpha)) + cx
    ybound = (xbound_r * np.sin(-alpha) + ybound_r * np.cos(-alpha)) + cy
    return xbound, ybound
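For example, called on a small point set (assuming numpy is imported as np, as in the function above):

x = np.array([0.0, 4.0, 4.0, 0.0])
y = np.array([0.0, 0.0, 2.0, 2.0])
xbound, ybound = BoundingRectangleAnglePoints(x, y, 30)  # box rotated by 30 degrees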
