How to compute a 3D rotation matrix from user movement of a control point

I have a projected view of a 3D scene. The 2D points are computed by multiplying the 3D points in homogeneous coordinates by a view matrix (which includes a translation and rotation) and a perspective matrix. I want to allow the user to move control points which describe the three axes, and to update the rotation matrix based on this.
How do I compute the new rotation matrix given a change in projected 2D coordinates, assuming rotation around the origin? Solving for the 3D position of the end of a single axis is highly degenerate (there is a large set of possible solutions), but perhaps solving for the rotation about the axes perpendicular to the moved axis might work.
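One way to sidestep the degeneracy (a sketch, not a definitive answer): treat the dragged axis endpoint as a unit vector, unproject the new 2D position back onto the unit sphere around the origin, and apply the minimal rotation that carries the old axis direction onto the new one. That rotation has no twist about the moved axis, which matches the intuition of rotating only in the perpendicular directions. A numpy sketch; the unprojection step is assumed to exist in your code:

```python
import numpy as np

def rotation_between(a, b):
    """Minimal rotation taking unit vector a onto unit vector b
    (Rodrigues' formula). It has no twist about the moved axis,
    which is one way to resolve the degeneracy."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)            # rotation axis, length sin(angle)
    c = np.dot(a, b)              # cos(angle)
    s = np.linalg.norm(v)         # sin(angle)
    if s < 1e-12:
        if c > 0:
            return np.eye(3)      # a and b already coincide
        # anti-parallel: rotate pi about any axis p perpendicular to a
        p = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(p) < 1e-6:
            p = np.cross(a, [0.0, 1.0, 0.0])
        p /= np.linalg.norm(p)
        return 2.0 * np.outer(p, p) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])   # cross-product matrix of v
    return np.eye(3) + K + K @ K * ((1.0 - c) / s**2)

# old_dir: current world-space direction of the dragged axis (a column of R)
# new_dir: direction recovered by unprojecting the new 2D control point
#          (e.g. by intersecting the mouse ray with the unit sphere)
# R_new = rotation_between(old_dir, new_dir) @ R_old
```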

Related

Scaling the component axes of a rotation

I have an orientation sensor that outputs a rotation (either quaternion or rotation matrix).
I need to apply a calibration to the output which involves scaling the magnitude of the X, Y, and Z axes.
My current approach is to deconstruct the rotation in each plane, calculate and scale the components of the two axes, and then recalculate the respective angles. I then reconstruct the modified rotation.
Is there a simpler approach that I'm missing?
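If the calibration can be interpreted as per-axis gains on the rotation-vector (axis-angle) representation, there is a much shorter route: convert to a rotation vector, scale its components, and convert back. Whether that matches the intended meaning of "scaling the axes" for this sensor is an assumption. A minimal sketch with scipy:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def calibrate(rot, gains):
    """Scale the X, Y, Z components of the rotation vector
    (unit axis times angle) by per-axis calibration gains.
    Assumes the calibration is meant in rotation-vector space."""
    rv = rot.as_rotvec()                      # 3-vector, radians
    return Rotation.from_rotvec(rv * np.asarray(gains))

# works whichever representation the sensor outputs:
r = Rotation.from_quat([0.0, 0.0, 0.3826834, 0.9238795])  # 45 deg about Z
r_cal = calibrate(r, gains=[1.0, 1.0, 0.98])
print(r_cal.as_matrix())                      # or .as_quat()
```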

How do I find the corners of a plane in 3d space if I know three points

Apologies in advance for my feeble maths.
I'm trying to find the corners of a plane in space based on the equation of that plane. Here's what I know: I know three points on the plane, both where they fall in the 2D coordinate space of the plane (x, y) and where they are in 3D space. I know the width and height of the plane, and I can now calculate the equation of the plane. The plane sits on the inside of a large sphere that surrounds the origin, so in theory it should more or less face where the camera is (though in my diagram it doesn't face the origin, as the diagram is just for illustrative purposes).
But it's not clear to me how I can use that to figure out another point. One thought I had was to find the transform that rotates the plane parallel to the xy plane about one of the points (so it stays in the same place), find the position of the new point, and then apply the inverse of that transform. But it's not clear how I would find that transform matrix or how to use it. Could I do this using the normal and some vector maths? I understand what normals are, but I'm fuzzy about how to use them.
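Since the three points are known both in the plane's own 2D coordinates and in 3D, one route avoids explicit rotation matrices entirely: the mapping between the two is affine, P = O + x·U + y·V, and three non-collinear correspondences determine the origin O and the in-plane basis vectors U, V by solving a small linear system. A numpy sketch; placing the corners at (0, 0) through (width, height) in plane coordinates is an assumption about where the plane's 2D origin sits:

```python
import numpy as np

def plane_corners(uv, xyz, width, height):
    """uv:  (3, 2) known 2D plane coordinates of three points
    xyz: (3, 3) the same three points in 3D
    Solves P = O + x*U + y*V for O, U, V, then maps the corners."""
    # one row per point: [1, x, y] @ [O; U; V] = P
    A = np.hstack([np.ones((3, 1)), uv])          # (3, 3)
    OUV = np.linalg.solve(A, xyz)                 # rows are O, U, V
    corners2d = np.array([[0, 0], [width, 0],
                          [width, height], [0, height]])
    B = np.hstack([np.ones((4, 1)), corners2d])
    return B @ OUV                                # (4, 3) corners in 3D
```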

Using a homography matrix and decomposing it to find the orientation of a plane fixed in the centre

I currently have two images of a real-life plane taken from straight above. One serves as a reference image; in the other, the plane has undergone a rotation about its centre, changing its orientation. The camera stays in a constant position.
I was wondering: if I found the homography matrix of this rotation in OpenCV and then decomposed the homography matrix to find the rotation matrix, would this yield accurate results? Would I be able to recover, to a reasonable degree of accuracy, the three angles needed to describe the plane's rotation in Euclidean coordinates?
Thanks
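In principle this is exactly what cv2.decomposeHomographyMat does, with two caveats: it needs the camera intrinsic matrix K, and it returns up to four candidate solutions that have to be disambiguated (for example, using the returned plane normals). A runnable sketch with synthetic points standing in for real image matches; the intrinsics and the 10-degree rotation are made-up example values:

```python
import cv2
import numpy as np
from scipy.spatial.transform import Rotation

K = np.array([[800.0, 0.0, 320.0],   # made-up example intrinsics;
              [0.0, 800.0, 240.0],   # use your calibrated camera matrix
              [0.0, 0.0, 1.0]])

# synthetic stand-in for real matches: a square on the plane z = 1,
# seen before and after a 10-degree rotation about the camera axis
pts3d = np.array([[-0.2, -0.2, 1.0], [0.2, -0.2, 1.0],
                  [0.2, 0.2, 1.0], [-0.2, 0.2, 1.0]])
Rz = cv2.Rodrigues(np.array([0.0, 0.0, np.radians(10.0)]))[0]

def project(P):
    q = P @ K.T
    return (q[:, :2] / q[:, 2:]).astype(np.float32)

H, _ = cv2.findHomography(project(pts3d), project(pts3d @ Rz.T))
n, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
for R in Rs:  # up to four candidates; use the normals to disambiguate
    print(Rotation.from_matrix(R).as_euler('xyz', degrees=True))
```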

Transforming a 3D plane onto a 2D coordinate system

Say I have a set of points from a sensor which all lie, within a margin of error, on a 2D plane somewhere in 3D space. How would I go about transforming the coordinates of the points onto a 2D coordinate system, so that, for example, the convex hull of the points or the distances between the points don't change?
Assuming you know the equation of the plane (otherwise you can fit it by least squares or another method), construct a new coordinate frame as follows:
1. take the normal vector;
2. form its cross product with an arbitrary vector pointing in a different direction;
3. form the cross product of the normal and that second vector;
4. normalize all three vectors and name the new axes z, x, y.
This creates an orthonormal basis into which you transform the points. The change of basis is a rigid transform, which preserves all distances. You can then drop the z coordinate to get the orthogonal projections of the points onto the plane.
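A numpy sketch of this recipe; the choice of helper vector and of the origin point on the plane are arbitrary:

```python
import numpy as np

def plane_to_2d(points, normal):
    """Map 3D points lying near a plane (given its unit normal) to a
    2D coordinate system in that plane, preserving all distances."""
    z = normal / np.linalg.norm(normal)
    a = np.array([1.0, 0.0, 0.0])              # arbitrary helper vector
    if abs(np.dot(a, z)) > 0.9:                # too close to the normal
        a = np.array([0.0, 1.0, 0.0])
    x = np.cross(a, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                         # already unit length
    origin = points.mean(axis=0)               # any point on the plane works
    local = (points - origin) @ np.column_stack([x, y, z])
    return local[:, :2]                        # drop z (distance to plane)
```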

Reconstructing a 2D shape from its projection in 1D

I have a convex closed shape in 2D space (in the x-y plane). I do not know what it looks like. I rotate this shape 64 times, by 5.625 degrees (360/64) each time, about approximately the centre of its bounding box. For each rotation I have the x-coordinates of the extreme points of the shape; in other words, I know the left and right x extents of the shape for each rotation (assuming an orthographic projection). How do I obtain 64 points on the shape that do not contradict the x projections?
Note that the 2D shape is rotating, but the coordinate axes are not rotating along with it. So if the object were a line segment, the x projection of each end, plotted over the rotations, would essentially trace out a sine/cosine wave depending on its original orientation.
The more rotations I have, the closer a solution should get to my actual shape.
In reality I do not know the exact point I am rotating the shape about, but any solution assuming I do know it will still be helpful, as I don't mind the reconstruction being imperfect.
We used the straightforward method to reconstruct: a projection is the shadow of the object.
You start with a bounding 2D box. For each projection you cut away from the current 2D shape the left and right parts that fall outside of that projection. The main operation is therefore the intersection of two convex 2D shapes, computed once per projection.
In the figures we had several purple projections P1, P2, P3, P4 of the original green object. Knowing the position of a purple projection, you build two red rays coming from the end points of the projection and intersect them with the reconstructed object.
The red object shown was reconstructed using 4 projections. Compared to the original green shape, they are not the same; the more projections you have, the less error you get in the final result.
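A compact sketch of that cutting scheme: keep a convex polygon and clip it against the slab each projection defines. The direction d assigned to each rotation angle depends on the rotation sign convention, so that line is an assumption to check against your data:

```python
import numpy as np

def clip_halfplane(poly, d, c):
    """Keep the part of convex polygon poly (Nx2) with p . d <= c."""
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        fp, fq = np.dot(p, d) - c, np.dot(q, d) - c
        if fp <= 0:
            out.append(p)
        if fp * fq < 0:                           # edge crosses the boundary
            out.append(p + (q - p) * fp / (fp - fq))
    return np.array(out)

def reconstruct(extents, angles, half_size=10.0):
    """extents[i] = (lo, hi): measured x-extent after rotating the shape
    by angles[i]. Start from a big bounding square and cut away, for each
    projection, everything outside the slab lo <= p . d <= hi."""
    s = half_size
    poly = np.array([[-s, -s], [s, -s], [s, s], [-s, s]], float)
    for (lo, hi), a in zip(extents, angles):
        # x-axis as seen in the shape's frame; sign convention assumed
        d = np.array([np.cos(a), -np.sin(a)])
        poly = clip_halfplane(poly, d, hi)        # cut right of the projection
        poly = clip_halfplane(poly, -d, -lo)      # cut left of the projection
    return poly  # vertices of the reconstructed convex shape
```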
