Scaling the component axes of a rotation

I have an orientation sensor that outputs a rotation (either quaternion or rotation matrix).
I need to apply a calibration to the output which involves scaling the magnitude of the X,Y and Z axes.
My current approach is to decompose the rotation in each plane, compute and scale the components along the two axes, and then recalculate the respective angles. I then reconstruct the modified rotation.
Is there a simpler approach that I'm missing?
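One possibly simpler alternative, if the calibration can act on the axis-angle (rotation-vector) components rather than on per-plane angles: convert the rotation to a rotation vector, scale its x, y, z components, and convert back. A minimal numpy sketch; the (x, y, z, w) quaternion convention and the per-axis scale factors are assumptions:

```python
import numpy as np

def quat_to_rotvec(q):
    # q = (x, y, z, w), assumed unit quaternion
    x, y, z, w = q
    vec_norm = np.linalg.norm([x, y, z])
    angle = 2.0 * np.arctan2(vec_norm, w)
    if vec_norm < 1e-12:
        return np.zeros(3)
    return np.array([x, y, z]) / vec_norm * angle

def rotvec_to_quat(v):
    angle = np.linalg.norm(v)
    if angle < 1e-12:
        return np.array([0.0, 0.0, 0.0, 1.0])  # identity rotation
    axis = v / angle
    return np.concatenate([axis * np.sin(angle / 2), [np.cos(angle / 2)]])

def calibrate(q, scale):
    # Scale the rotation-vector components per axis, then recompose
    return rotvec_to_quat(quat_to_rotvec(q) * np.asarray(scale, dtype=float))
```

Whether this matches your calibration model depends on what the per-axis scaling is meant to correct; it avoids the per-plane decomposition and angle recomputation, but it is not equivalent to scaling Euler angles in general.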

Related

What would the rotation vectors be for a torus using simple toroidal coordinates (toroidal and poloidal coordinates)?

https://en.wikipedia.org/wiki/Toroidal_and_poloidal_coordinates
For example, I see there are the basic rotation matrices for each axis shown in [1], but how would I find the equivalent in toroidal coordinates?
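For reference, the basic Cartesian rotation matrices mentioned above can be written out as below; how these carry over to the toroidal and poloidal directions is exactly the open question. A numpy sketch:

```python
import numpy as np

def rot_x(a):
    # Rotation by angle a (radians) about the x axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s, c]])

def rot_y(a):
    # Rotation by angle a about the y axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s],
                     [0, 1, 0],
                     [-s, 0, c]])

def rot_z(a):
    # Rotation by angle a about the z axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0],
                     [s, c, 0],
                     [0, 0, 1]])
```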

Using a homography matrix and decomposing it to find the orientation of a plane fixed in the centre

I currently have two images of a plane taken from straight above: one to use as a reference image, and another after the plane has undergone a rotation fixed at its centre, changing its orientation. The camera stays at a constant position.
If I found the homography matrix of this rotation in OpenCV and then decomposed it to find the rotation matrix, would this yield accurate results? Would I be able to recover, to a reasonable degree of accuracy, the three angles needed to describe the plane's rotation in Euclidean coordinates?
Thanks
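In the special case where the homography is conjugate to the rotation (H proportional to K·R·K⁻¹, e.g. the rotation axis passes through the camera's optical centre), R can be recovered directly from H and the intrinsics. The general planar case is what cv2.findHomography followed by cv2.decomposeHomographyMat handles; that call returns several candidate (R, t, n) solutions you must disambiguate. A numpy sketch of the conjugate case, with hypothetical intrinsics K chosen only for illustration:

```python
import numpy as np

def rotation_from_homography(H, K):
    # If H is induced purely by a rotation R (H = s * K @ R @ inv(K)
    # for some unknown scale s), conjugating by K recovers s * R.
    R = np.linalg.inv(K) @ H @ K
    # A proper rotation has determinant 1, so divide out the scale.
    return R / np.cbrt(np.linalg.det(R))

# Hypothetical pinhole intrinsics for illustration
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
```

Accuracy in practice depends mostly on how well the homography itself is estimated (feature quality, spread of correspondences), so RANSAC in cv2.findHomography is usually worthwhile.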

How to compute 3D rotation matrix by user movement of control point

I have a projected view of a 3D scene. The 2D points are computed by multiplying the 3D points in homogeneous coordinates by a view matrix (which includes a translation and rotation) and a perspective matrix. I want to allow the user to move control points that describe the three axes, and update the rotation matrix accordingly.
How do I compute the new rotation matrix given a change in projected 2D coordinates, assuming rotation around the origin? Solving for the position of the end of a single axis has a large degeneracy in the set of possible solutions, but perhaps solving for rotation in the axes perpendicular to the moved axis might work.
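A common way to turn a 2D drag into a well-defined rotation is an arcball (trackball) mapping: project both 2D positions onto a virtual sphere and rotate about the axis perpendicular to the two sphere points, which sidesteps the degeneracy. A sketch; the assumption that the 2D points arrive in [-1, 1] viewport coordinates is mine:

```python
import numpy as np

def to_sphere(p, radius=1.0):
    # Map a 2D viewport point onto a virtual sphere (Bell's variant:
    # fall back to a hyperbolic sheet outside the sphere's silhouette).
    x, y = p
    d2 = x * x + y * y
    if d2 <= radius * radius / 2:
        z = np.sqrt(radius * radius - d2)
    else:
        z = radius * radius / (2 * np.sqrt(d2))
    return np.array([x, y, z])

def arcball_rotation(p0, p1):
    # 3x3 rotation taking the sphere point under p0 to the one under p1
    v0 = to_sphere(p0); v0 /= np.linalg.norm(v0)
    v1 = to_sphere(p1); v1 /= np.linalg.norm(v1)
    axis = np.cross(v0, v1)
    s, c = np.linalg.norm(axis), np.dot(v0, v1)
    if s < 1e-12:
        return np.eye(3)  # no movement
    axis /= s
    # Rodrigues' rotation formula
    Kx = np.array([[0, -axis[2], axis[1]],
                   [axis[2], 0, -axis[0]],
                   [-axis[1], axis[0], 0]])
    return np.eye(3) + s * Kx + (1 - c) * (Kx @ Kx)
```

The resulting matrix is composed with (left-multiplied onto) the current rotation on each drag update.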

How to change the SCNCamera rotation according to looksAt and up-vector?

Given the direction vector in which the SCNCamera looks and the up vector that points in the upward direction of the camera, how can the camera's rotation about each individual axis be calculated?
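One way, assuming SceneKit's convention that a camera looks down its local -z axis: build an orthonormal basis from the look direction and up vector; the columns of that basis are the camera's world-space axes, and per-axis Euler angles can then be read off the matrix. A numpy sketch of the basis construction:

```python
import numpy as np

def camera_rotation(look_dir, up):
    # Orthonormal camera basis; the camera looks down its local -z axis
    f = look_dir / np.linalg.norm(look_dir)      # forward
    r = np.cross(f, up)                          # right
    r /= np.linalg.norm(r)
    u = np.cross(r, f)                           # true up (re-orthogonalized)
    # Columns are the camera's local x, y, z axes in world space
    return np.column_stack([r, u, -f])
```

Note that the supplied up vector need not be exactly perpendicular to the look direction; the double cross product re-orthogonalizes it.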

Computing depthmap from 3D reconstruction model

I'm using VisualSfM to build the 3D reconstruction of a scene. Now I want to estimate the depthmap and reproject the image. Any idea on how to do it?
If you have the camera intrinsic matrix K, its position vector in the world C, and an orientation matrix R that rotates from world space to camera space, you can iterate over all pixels (x, y) in your image and cast the ray P(t) = C + t · Rᵀ K⁻¹ (x, y, 1)ᵀ.
Then, using ray tracing, find the minimal t that causes the ray to intersect your 3D model (assuming it is dense; otherwise interpolate it), so that P lies on your model. The t value you found is then the pixel value of the depth map (perhaps normalized to some range).
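The loop described above can be sketched as follows; `intersect` is a hypothetical ray-tracing callback against your reconstructed model that returns the smallest positive t (or infinity on a miss):

```python
import numpy as np

def depth_map(K, R, C, width, height, intersect):
    # intersect(origin, direction) -> smallest positive t, or np.inf.
    # R rotates world -> camera, so R.T takes camera rays into the world.
    K_inv = np.linalg.inv(K)
    depth = np.full((height, width), np.inf)
    for y in range(height):
        for x in range(width):
            # Back-project pixel (x, y): ray P(t) = C + t * R^T K^-1 (x, y, 1)
            d = R.T @ K_inv @ np.array([x, y, 1.0])
            d /= np.linalg.norm(d)   # t then measures Euclidean distance
            depth[y, x] = intersect(C, d)
    return depth
```

With the direction normalized, t is the Euclidean distance along the ray; if you want z-depth instead, take the camera-space z component of P rather than t.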