I'm trying to calculate the axis of rotation of a ball which is moving and spinning at the same time, i.e. I want the vector along the axis that the ball is spinning on.
For every frame I know the x, y and z locations of 3 specific points on the surface of the sphere. I assume that by looking at how these 3 points have moved between successive frames you can calculate the axis of rotation of the ball; however, I have very little experience with this kind of maths, so any help would be appreciated!
You could use the fact that the direction a position vector moves in will always be perpendicular to the axis of rotation. Therefore, if you have two position vectors v1 and v2 at successive times (for the same point), use
(v2 - v1) · w = 0
This gives you an equation with three unknowns (the components of w, the rotation axis). If you then plug in all three points you have knowledge of, you should be able to solve these simultaneous equations and work out w.
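As a minimal sketch of how you might do this in code (assuming the ball's overall translation between frames has already been subtracted out, so only the spin remains): each point's displacement between frames is perpendicular to w, so the cross product of any two displacements points along the axis. The Vec3 type and function names below are purely illustrative.

// Hypothetical sketch: estimate the rotation axis w from two tracked surface
// points, assuming the ball's translation has already been removed and the
// time step between frames is small.
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Each displacement d_i = p_i(t+1) - p_i(t) satisfies d_i · w = 0, so w is
// (up to sign) parallel to the cross product of any two displacements.
Vec3 estimateRotationAxis(const Vec3& p1Prev, const Vec3& p1Curr,
                          const Vec3& p2Prev, const Vec3& p2Curr) {
    Vec3 d1 = p1Curr - p1Prev;
    Vec3 d2 = p2Curr - p2Prev;
    return normalize(cross(d1, d2)); // the third point can serve as a consistency check
}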
For a project I want to do a very simple Pythagoras calculation in C++. An object is equipped with an IMU sensor that gives either a quaternion rotation or Euler angles. What I want to know is the opposite sides of the triangle underneath the object:
I want to know these sides of the triangle for both the X and Y axes (black arrows):
This would be very simple, except for the fact that the object can rotate. When the object is rotated I still want to use the X and Y axes in world space (black arrows), but when yawing, the Euler angles from the IMU give me pitch and roll in local space (red arrows):
In what way can I still get the world-space angles (black arrows) while yawing, so I can do my simple Pythagoras calculation? If I can't get them, is there a way to calculate the opposite sides I want using quaternions?
We can do the calculation by taking the Euler angles into account in the following order:
Pitch
First of all, as you change the roll of the sensor, the sensor "ray" sweeps out a plane inclined to the horizon at angle pitch. We need to first calculate the closest distance between (i) the line of intersection between the plane and the ground, and (ii) the point directly below the sensor on the ground. This is given by d = h * tan(pitch).
Roll
Next we need another trigonometric step. As before, the roll sweeps the ray through this plane. The offset distance along the axis perpendicular to the line joining (i) and (ii) is given by f = h / cos(pitch) * tan(roll). This gives the intersection point on the ground as (d, f).
Yaw
Previously, we considered a frame in which the yaw was zero. We now need to rotate this intersection point around the Z-axis by yaw. Thus the final intersection point is given by (x, y) = (d * cos(yaw) - f * sin(yaw), d * sin(yaw) + f * cos(yaw)). You can calculate the "space angle" you want by taking atan2(y, x).
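Putting the three steps together, a rough C++ sketch might look like the following. It assumes pitch, roll and yaw are in radians, h is the height of the sensor above the ground, and the ray actually reaches the ground (pitch and roll away from ±90°); the function name is made up.

// A minimal sketch of the three steps above, under the stated assumptions.
#include <cmath>
#include <utility>

// Returns the world-space (x, y) ground intersection of the sensor ray.
std::pair<double, double> groundIntersection(double pitch, double roll,
                                             double yaw, double h) {
    double d = h * std::tan(pitch);                  // offset from the pitch alone
    double f = h / std::cos(pitch) * std::tan(roll); // offset from the roll, in the tilted plane
    // Rotate the (d, f) point around the vertical axis by yaw.
    double x = d * std::cos(yaw) - f * std::sin(yaw);
    double y = d * std::sin(yaw) + f * std::cos(yaw);
    return {x, y};
}

// The "space angle" mentioned above is then atan2(y, x); x and y themselves
// are the distances along the world X and Y axes.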
All the plane definitions I've found use either four numbers (for the plane normal and distance from origin definition) or six numbers (for the plane normal and point that is on the plane definition).
Maybe I'm missing something, but shouldn't it be possible to define a plane with only three numbers, (nx, ny, nz), using the direction of the vector as the plane normal and the magnitude of the vector as the distance from the origin?
I am trying to write a game that generates billions of planes, and shaving 25% off of my plane struct would really help.
It is possible, at the cost of recalculating the distance to the origin every time you need it.
If you need a solution using 3 parameters that has no degenerate case, use two direction angles (U, V) and the distance to the origin D.
Equation of the plane: cos(U)·X + sin(U)·cos(V)·Y + sin(U)·sin(V)·Z = D.
If high accuracy is not mandated, you can store the angles as shorts, with suitable scaling, achieving 0°00'20" resolution. With float D, this packs to 8 bytes per plane.
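As an illustration of that packing, here is a hypothetical 8-byte C++ struct with two 16-bit angles and a float distance. The pack/unpack helpers are made-up names, not part of any library.

#include <cstdint>
#include <cmath>

struct PackedPlane {
    std::uint16_t u; // angle U scaled to [0, 2*pi) over the 16-bit range
    std::uint16_t v; // angle V scaled the same way
    float d;         // signed distance to the origin
};

constexpr double kTwoPi = 6.283185307179586;

PackedPlane pack(double U, double V, double D) {
    auto toU16 = [](double a) {
        double wrapped = std::fmod(std::fmod(a, kTwoPi) + kTwoPi, kTwoPi);
        return static_cast<std::uint16_t>(wrapped / kTwoPi * 65536.0);
    };
    return {toU16(U), toU16(V), static_cast<float>(D)};
}

// Reconstruct the unit normal (nx, ny, nz) and distance, matching the
// plane equation cos(U)·X + sin(U)·cos(V)·Y + sin(U)·sin(V)·Z = D.
void unpack(const PackedPlane& p, double& nx, double& ny, double& nz, double& D) {
    double U = p.u * kTwoPi / 65536.0;
    double V = p.v * kTwoPi / 65536.0;
    nx = std::cos(U);
    ny = std::sin(U) * std::cos(V);
    nz = std::sin(U) * std::sin(V);
    D = p.d;
}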
I have a stack of images (about 180 of them), and there are 2 stars (just basic annotations) on every single image. Hence, the positions (x,y) of the two stars are provided initially. The dimensions of all these images are fixed and constant.
The angular 'distance' between successive images is about 1°, with the origin taken to be the center (width/2, height/2) of every single 2D image. Note that, if this is plotted out and interpolated nicely, the stars would actually form a ring of an irregular shape.
The dotted red circle and dotted purple circle are there to give a stronger sense of a 3D space and of the arrangement of the 2D images (like a fan). It also indicates that each slice is about 1° apart.
With the provided (x,y) that appear in the 2D images, how do you get the corresponding (x,y,z) in 3D space, knowing that each image is about 1° apart?
I know that MATLAB has 3D plotting capabilities; how should I go about implementing the solution to the above scenario? (Unfortunately, I have very little experience plotting in 3D with MATLAB.)
SOLUTION
Based on the accepted answer, I looked up a bit further: spherical coordinate system. Based on the computation of phi, rho and theta, I could reconstruct the ring without problems. Hopefully this helps anyone with similar problems.
I have also documented the solution here. I hope it helps someone out there, too:
http://gray-suit.blogspot.com/2011/07/spherical-coordinate-system.html
I believe the y coordinate stays as is for 3D, so we can treat this as converting 2D x and image angle to an x and z when viewed top down.
The 2D x coordinate is the distance from the origin in 3D space (viewed top down). The image angle is the angle the point makes with respect to the x axis in 3D space (viewed top down). So the x coordinate (distance from the origin) and the image angle (angle viewed top down) make up the x and z coordinates in 3D space (or x and y if viewed top down).
That is a polar coordinate.
Read how to convert from polar to Cartesian coordinates to get your 3D x and z coordinates.
I'm not great at maths either; here's my go:
3D coords = (2Dx * cos(imageangle), 2Dy, 2Dx * sin(imageangle))
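In code, that conversion might look like the following rough C++ sketch (purely an illustration of the formula above; the Point3D type, the function name and the sliceAngleDegrees parameter are assumptions, with roughly one degree per image as stated in the question).

#include <cmath>

struct Point3D { double x, y, z; };

constexpr double kPi = 3.141592653589793;

Point3D toWorld(double x2d, double y2d, int imageIndex, double sliceAngleDegrees = 1.0) {
    double angle = imageIndex * sliceAngleDegrees * kPi / 180.0; // image angle in radians
    return {x2d * std::cos(angle),  // 3D x
            y2d,                    // y stays as it is
            x2d * std::sin(angle)}; // 3D z
}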
Given the 2D coordinates (x,y) just add the angle A as a third coordinate: (x,y,A). Then you have 3D.
If you want to have the annotations move on a circle of radius r in 3D, you can use (r*cos(phi), r*sin(phi), 0), which draws a circle in the XY-plane, and rotate it with a 3x3 rotation matrix into the orientation you need.
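For example, here is a rough C++ sketch of that idea: sample the circle in the XY plane and multiply each point by a 3x3 rotation matrix (an arbitrary tilt about the x axis, chosen only for illustration).

#include <cmath>
#include <vector>
#include <array>

std::vector<std::array<double, 3>> ringPoints(double r, double tilt, int n) {
    // Rotation about the x axis by 'tilt' radians (example orientation only).
    double R[3][3] = {{1, 0, 0},
                      {0, std::cos(tilt), -std::sin(tilt)},
                      {0, std::sin(tilt),  std::cos(tilt)}};
    std::vector<std::array<double, 3>> pts;
    for (int i = 0; i < n; ++i) {
        double phi = 2.0 * 3.141592653589793 * i / n;
        double p[3] = {r * std::cos(phi), r * std::sin(phi), 0.0}; // circle in the XY plane
        std::array<double, 3> q{};
        for (int row = 0; row < 3; ++row)
            q[row] = R[row][0] * p[0] + R[row][1] * p[1] + R[row][2] * p[2];
        pts.push_back(q);
    }
    return pts;
}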
It is not clear from your question around which axis your rotation is taking place. However, my answer holds for a general rotation axis.
First, put your points in a 3D space, lying on the X-Y plane. This means the points have a z-coordinate of 0. Then, apply a 3D rotation of the desired angle around the desired axis; in your example, it is a one-degree rotation. You could calculate the transformation matrix yourself (it should not be too hard; google "3D rotation matrix" or similar keywords). However, MATLAB makes it easier, using the viewmtx function, which gives you a 4x4 rotation matrix. The extra (fourth) dimension depends on the projection you specify (it acts like a scaling coefficient), but to keep things simple I will let MATLAB use its default projection; you can read about it in the MATLAB documentation.
So, to make the plot clearer, I assume four points which are the vertices of a square lying on the x-y plane (A(1,1), B(-1,1), C(-1,-1), D(1,-1)).
az = 0;  % Angle (degrees) of rotation around the z axis, measured from the -y axis.
el = 90; % Angle (degrees) of rotation around the y' axis (the ' indicates axes after the first rotation).
x = [1, -1, -1, 1, 1]; y = [1, 1, -1, -1, 1]; z = [0, 0, 0, 0, 0]; % A square lying on the X-Y plane.
[m, n] = size(x);
x4d = [x(:), y(:), z(:), ones(m*n, 1)]'; % The 4D version of the points.
figure
for el = 90 : -1 : 0 % Start from 90 for viewing directly above the X-Y plane.
    T = viewmtx(az, el);
    x2d = T * x4d; % Rotated version of the points.
    plot3(x2d(1,:), x2d(2,:), x2d(3,:), '-*'); % Plot the rotated points in 3D space.
    grid
    xlim([-2, 2]);
    ylim([-2, 2]);
    zlim([-2, 2]);
    pause(0.1)
end
If you can describe your observation of a real physical system (like a binary star system) with a model, you can use particle filters.
Those filters were developed to locate a ship at sea when only one observation direction was available. One tracks the ship and estimates where it is and how fast it moves; the longer one follows, the better the estimates become.
As a follow-up to my previous question about determining camera parameters, I have formulated a new problem.
I have two pictures of the same rectangle:
The first is an image without any transformations and shows the rectangle as it is.
The second image shows the rectangle after some 3D transformation (XYZ rotation, scaling, XY translation) has been applied. This has caused the rectangle to look like a trapezoid.
I hope the following picture describes my problem:
(Image: http://wilco.menge.nl/application.data/cms/upload/transformation%20matrix.png)
How do I determine what transformations (more specifically, what transformation matrix) have caused this transformation?
I know the pixel locations of the corners in both images, hence I also know the distances between the corners.
I'm confused. Is this a 2d or a 3d problem?
The way I understand it, you have a flat rectangle embedded in 3d space, and you're looking at two 2d "pictures" of it - one of the original version and one based on the transformed version. Is this correct?
If this is correct, then there is not enough information to solve the problem. For example, suppose the two pictures look exactly the same. This could be because the transformation is the identity, or it could be because the transformation moves the rectangle twice as far away from the camera and doubles its size (thus making it look exactly the same).
This is a math problem, not a programming one.
You need to define a set of equations (your transformation matrix; my guess is three equations) and then solve them for the four transformations of the corner points.
I've only ever described this using German words, so the above may sound strange.
Based on the information you have, this is not that easy. I will give you some ideas to play with, however. If you had the 3D coordinates of the corners, you'd have an easier time. Here's the basic idea.
1. Move a corner to the origin. Thereafter, rotations will take place about the origin.
2. Determine vectors for the axes. Do this by subtracting the adjacent corners from the origin point. These will be a local x and y axis for your world.
3. Determine angles using the vectors. You can use the dot and cross products to determine the angle between the local x axis and the global x axis (1, 0, 0) (see the sketch below).
4. Rotate by the angle from step 3. This will give you a new x axis which should match the global x axis, and a new local y axis. You can then determine another rotation about the x axis which will bring the y axis into alignment with the global y axis.
Without the z coordinates, you can see that this will be difficult, but this is the general process. I hope this helps.
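For what it's worth, here is a rough C++ sketch of steps 2 and 3 (local axes from corner differences, then the angle to the global x axis via the dot and cross products). It assumes you somehow have 3D corner coordinates, which, as noted, you don't get from the pixel locations alone; the Vec3 helpers are illustrative only.

#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
double length(const Vec3& v) { return std::sqrt(dot(v, v)); }

// Angle between the local x axis (corner -> adjacent corner) and the global
// x axis (1, 0, 0), using atan2(|cross|, dot) for numerical robustness.
double angleToGlobalX(const Vec3& origin, const Vec3& adjacentCorner) {
    Vec3 localX = sub(adjacentCorner, origin); // step 2: local axis
    Vec3 globalX = {1.0, 0.0, 0.0};
    return std::atan2(length(cross(localX, globalX)), dot(localX, globalX)); // step 3
}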
The solution will not be unique, as Alex319 points out.
If the second image is really a trapezoid as you say, then this won't be too hard. It is a trapezoid (not a parallelogram) because of perspective, so it must be an isosceles trapezoid.
Draw the two diagonals. They intersect at the center of the rectangle, so that takes care of the translation.
Rotate the trapezoid until its parallel sides are parallel to two sides of the original rectangle. (Which two? It doesn't matter.)
Draw a third parallel through the center. Scale this to the sides of the rectangle you chose.
Now for the rotation out of the plane. Measure the distance from the center to one of the parallel sides and use the law of sines.
If it's not a trapezoid, just a general quadrilateral, then it'll be harder; you'll have to use the angles between the diagonals to find the axis of rotation.
I've asked some questions here and seen this geometric shape mentioned a few times among other geodesic shapes, but I'm curious: how exactly would I generate one about a point (x, y, z)?
There's a tutorial here.
The essential idea is to start with an icosahedron (which has 20 triangular faces) and to repeatedly subdivide each triangular face into smaller triangles. At each stage, each new point is shifted radially so it is the correct distance from the centre.
The number of stages will determine how many triangles are generated and hence how close the resulting mesh will be to a sphere.
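As a rough illustration (not taken from the linked tutorial), one subdivision step might look like the C++ sketch below: each triangle is split into four, and the edge midpoints are pushed back out to the sphere radius. Vertex sharing/indexing is omitted for brevity.

#include <cmath>
#include <array>
#include <vector>

using Vec3 = std::array<double, 3>;
using Triangle = std::array<Vec3, 3>;

// Assumes the input vertices already lie on a sphere of the given radius.
Vec3 midpointOnSphere(const Vec3& a, const Vec3& b, double radius) {
    Vec3 m = {(a[0] + b[0]) / 2, (a[1] + b[1]) / 2, (a[2] + b[2]) / 2};
    double len = std::sqrt(m[0] * m[0] + m[1] * m[1] + m[2] * m[2]);
    return {m[0] / len * radius, m[1] / len * radius, m[2] / len * radius};
}

// One subdivision stage: every triangle becomes four, with new vertices
// shifted radially back onto the sphere so the mesh stays rounded.
std::vector<Triangle> subdivide(const std::vector<Triangle>& mesh, double radius) {
    std::vector<Triangle> out;
    for (const Triangle& t : mesh) {
        Vec3 ab = midpointOnSphere(t[0], t[1], radius);
        Vec3 bc = midpointOnSphere(t[1], t[2], radius);
        Vec3 ca = midpointOnSphere(t[2], t[0], radius);
        out.push_back({t[0], ab, ca});
        out.push_back({t[1], bc, ab});
        out.push_back({t[2], ca, bc});
        out.push_back({ab, bc, ca});
    }
    return out;
}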
Here is one reference that I've used for subdivided icosahedrons, based on the OpenGL Red Book. The BSD-licensed source code to my iPhone application Molecules contains code for generating simple icosahedrons and loading them into a vertex buffer object for OpenGL ES. I haven't yet incorporated subdivision to improve the quality of the rendering, but it's in my plans.
To tessellate a sphere, most people subdivide the points linearly, but that does not produce a rounded shape.
For a rounded tessellation, take the two points through a series of rotations.
1. Rotate the second point around z (by the z angle of point 1) to 0.
2. Rotate the second point around y (by the y angle of point 1) to 0 (this logically puts point 1 at the north pole).
3. Rotate the second point around z to 0 (this logically puts point 1 on the x/y plane, which now becomes a unit circle).
4. Find the half-angle, and compute x and y for the new third point, point 3.
5. Perform the counter-rotations in the reverse order for steps 3, 2 and 1 to position the third point at its destination.
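This is not the exact rotation sequence above, but an equivalent shortcut for the half-angle step, sketched in C++: rotate point 1 halfway toward point 2 about the axis perpendicular to both, using Rodrigues' rotation formula. It assumes both inputs are unit vectors that are neither identical nor antipodal.

#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 scale(const Vec3& v, double s) { return {v.x * s, v.y * s, v.z * s}; }
Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Spherical midpoint of two unit vectors p1 and p2.
Vec3 sphericalMidpoint(const Vec3& p1, const Vec3& p2) {
    Vec3 k = normalize(cross(p1, p2));          // rotation axis, perpendicular to both points
    double half = 0.5 * std::acos(dot(p1, p2)); // half of the angle between them
    // Rodrigues: v' = v cos(t) + (k x v) sin(t) + k (k . v)(1 - cos(t));
    // here k . p1 = 0, so the last term drops out.
    return add(scale(p1, std::cos(half)), scale(cross(k, p1), std::sin(half)));
}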
There are also some mathematical considerations for values near each of the near-0 locations, such as the north and south pole, and the right-most and left-most, and fore-most and aft-most positions, so check those first and perform an additional rotation by pi/4 (45 degrees) if they're at those locations. This prevents floating point math libraries from freaking out and producing wildly out-of-character values for atan2() and other trig functions.
Hope this helps! :-)