I have a sensor that gives me a quaternion. I convert the quaternion to an axis-angle representation using http://www.euclideanspace.com/maths/geometry/rotations/conversions/quaternionToAngle/. When I hold the sensor flat on a surface and rotate it while keeping it pressed against that surface, I would have expected the axis to stay the same and the angle to vary from 0 to 360 degrees, but this does not happen (the axis varies significantly). Any ideas why? Maybe I don't understand the axis-angle representation?
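For reference, here is a minimal Python sketch of the conversion described on that page, assuming a unit quaternion in (w, x, y, z) order:

```python
import math

def quat_to_axis_angle(w, x, y, z):
    """Convert a unit quaternion (w, x, y, z) to an (axis, angle) pair,
    following the same formulas as the euclideanspace.com page above."""
    w = max(-1.0, min(1.0, w))      # guard against floating-point drift
    angle = 2.0 * math.acos(w)      # rotation angle in radians
    s = math.sqrt(1.0 - w * w)      # sin(angle / 2)
    if s < 1e-9:
        # Angle is ~0, so the axis is arbitrary; pick x by convention.
        return (1.0, 0.0, 0.0), angle
    return (x / s, y / s, z / s), angle
```

One thing worth noting: the (axis, angle) pair describes the total rotation relative to the sensor's fixed reference orientation, not a rotation about the surface normal, so unless that reference orientation happens to be aligned with the surface the reported axis will move as you turn the sensor.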
Related
I have an orientation sensor that outputs a rotation (either quaternion or rotation matrix).
I need to apply a calibration to the output which involves scaling the magnitude of the X,Y and Z axes.
My current approach is to deconstruct the rotation in each plane, calculate and scale the components along the two axes, and then recalculate the respective angles. I then reconstruct the modified rotation.
Is there a simpler approach that I'm missing?
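Not necessarily simpler, but one possible reading of "scale the axes, then rebuild a valid rotation" is sketched below in Python. The diagonal scaling matrix and the SVD re-orthonormalization step are my assumptions and are not necessarily equivalent to the plane-by-plane decomposition described above:

```python
import numpy as np

def calibrate_rotation(R, sx, sy, sz):
    """Apply per-axis scale factors to a rotation matrix, then project the
    result back onto the nearest pure rotation (polar decomposition via SVD)."""
    S = np.diag([sx, sy, sz])
    M = S @ R                        # scaled matrix, no longer orthonormal
    U, _, Vt = np.linalg.svd(M)      # nearest rotation in the Frobenius sense
    if np.linalg.det(U @ Vt) < 0:    # keep a proper rotation (det = +1)
        U[:, -1] *= -1
    return U @ Vt
```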
I have a pendulum and an IMU sensor that can give me Euler angles and quaternions. I want to attach the sensor to the pendulum and measure the angle θ about the x-axis from the starting position (see the following image). The problem is that the pendulum also rotates about its own z-axis, and this runs into the Euler-angle singularities (gimbal lock). So when the pendulum is at 180 degrees and rotates around the z-axis, I get a wrong θ angle.
How can I solve this issue and get the correct θ angle?
example of pendulum and sensor
Edit:
Let's say that we attach the sensor to the pendulum such that the x-axis is parallel to the ceiling and the z-axis is parallel to the pendulum's thread.
Coordinate system of the sensor
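Given that placement (z-axis along the thread), here is a minimal Python sketch of one way to get the swing angle without going through Euler angles: compute how far the sensor's z-axis has tilted away from the vertical, straight from the quaternion. The assumption that the world z-axis is vertical, and the (w, x, y, z) quaternion order, are mine; depending on the sensor's convention you may need the conjugate quaternion.

```python
import numpy as np

def swing_angle(q):
    """Unsigned tilt (radians) of the sensor z-axis (along the thread)
    away from the world vertical, from a unit quaternion q = (w, x, y, z).
    A spin of the pendulum about its own z-axis leaves this angle unchanged."""
    w, x, y, z = q
    # (3,3) element of the rotation matrix = cos(angle between sensor z and world z).
    # This entry is the same whether the quaternion maps sensor->world or world->sensor.
    cos_tilt = 1.0 - 2.0 * (x * x + y * y)
    return np.arccos(np.clip(cos_tilt, -1.0, 1.0))
```

This gives the magnitude of θ; if you also need its sign (which side of vertical the pendulum is on), you would additionally look at the y-component of the rotated z-axis.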
In my program (using MATLAB), I specified (through dragging) the pedestrian lane as my Region Of Interest (ROI) with the coordinates [7, 178, 620, 190] (xmin, ymin, width, and height respectively) using the getrect, roipoly and insertShape functions. Refer to the image below.
The video from which this snapshot is taken has a resolution of 640x480 pixels (480p).
Defining a real world space as my ROI by mouse dragging is barbaric. That's why the ROI coordinates must be derived mathematically.
What I'm getting at is using real-world measurements from the video capture site and applying the Pythagorean theorem from where the camera is positioned:
How do I obtain the equivalent pixel coordinates and parameters using the real-world measurements?
I'll try to split your question into 2 smaller questions.
A) How do I obtain the equivalent pixel coordinates of an interesting point? (practical question)
Your program should be able to detect/recognize a feature or marker that you positioned at the interesting real-world point. The output is a coordinate in pixels. This can be done quite easily (think about QR codes, for example).
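As an illustration of step A (my assumptions: OpenCV is available and the marker is a QR code; the file name is a placeholder), a minimal Python sketch:

```python
import cv2

# Locate a QR code placed at the real-world point of interest and report
# where it appears in pixel coordinates.
img = cv2.imread("frame.png")            # one frame of your 640x480 video
detector = cv2.QRCodeDetector()
data, points, _ = detector.detectAndDecode(img)

if points is not None:
    corners = points.reshape(-1, 2)      # the 4 corners of the code, in pixels
    center = corners.mean(axis=0)        # pixel coordinates (x, y) of the marker
    print("marker centre in pixels:", center)
else:
    print("no marker found in this frame")
```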
B) What is the analytical relationship between a point in 3D space and its pixel coordinates in the image? (theoretical question)
This is the projection equation based on the pinhole camera model: the 3D coordinates X, Y, Z are related to the pixel coordinates x, y (a numeric sketch of the equation is given at the end of this answer).
Cool, but some details have to be explained (and there won't be any "automatic short formula").
s represents the scale factor. A single pixel in an image could be the projection of infinitely many different points, due to perspective. In your photo, a pixel containing a piece of a car (when the car is present) will be the same pixel that contains a piece of the street under the car (once the car has passed).
So there is no one-to-one relationship starting from pixel coordinates.
The matrix on the left involves the camera parameters (focal length, etc.), which are called intrinsic parameters. They have to be known to build the relationship between 3D coordinates and pixel coordinates.
The matrix on the right seems to be trivial: it is the combination of an identity matrix, which represents rotation, and a column of zeros, which represents translation. Something like T = [R|t].
Which rotation, which translation? You have to consider that every set of coordinates is implicitly expressed in its own reference system. So you have to determine the relationship between the reference system of your measurements and the camera reference system: not only the position of the camera in your 3D space (via Euclidean geometry), but also the orientation of the camera (angles).
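Putting part B together, here is a minimal Python sketch of the projection equation s * [x, y, 1]^T = K * [R | t] * [X, Y, Z, 1]^T. All the numeric values (focal length, principal point, the identity R and zero t) are placeholders you would replace with your own calibration:

```python
import numpy as np

# Pinhole model: s * [x, y, 1]^T = K @ [R | t] @ [X, Y, Z, 1]^T
fx, fy = 800.0, 800.0           # focal lengths in pixels (placeholder values)
cx, cy = 320.0, 240.0           # principal point, e.g. the centre of a 640x480 frame
K = np.array([[fx, 0.0, cx],    # intrinsic parameters
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

R = np.eye(3)                   # world-to-camera rotation (placeholder)
t = np.zeros((3, 1))            # world-to-camera translation (placeholder)
Rt = np.hstack([R, t])          # the [R | t] matrix from above

def project(point_3d):
    """Project a 3D world point to pixel coordinates (x, y)."""
    X = np.append(point_3d, 1.0)          # homogeneous coordinates
    p = K @ Rt @ X                        # equals s * [x, y, 1]
    s = p[2]                              # the scale factor discussed above
    return p[:2] / s

print(project([0.5, 0.2, 4.0]))           # a point half a metre right, 4 m ahead
```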
Given are the direction vector in which the SCNCamera looks and the up vector that points in the upward direction of the camera.
How can the rotation of the camera about each individual axis be calculated?
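As a sketch of one way to do this in Python (the right-handed frame, the camera looking down its local -Z axis, and the Y-X-Z angle order are my assumptions, not something taken from the SceneKit documentation):

```python
import numpy as np

def camera_euler_angles(look_dir, up):
    """Per-axis camera rotation angles (radians) from a look direction and
    an up vector.

    Assumptions: right-handed world frame, the camera looks along its local
    -Z axis, and the orientation is decomposed as
    R = Ry(yaw) @ Rx(pitch) @ Rz(roll) acting on column vectors.
    """
    f = np.asarray(look_dir, dtype=float)
    f = f / np.linalg.norm(f)                       # forward
    r = np.cross(f, np.asarray(up, dtype=float))
    r = r / np.linalg.norm(r)                       # right
    u = np.cross(r, f)                              # re-orthogonalised up

    # Columns are the camera axes in world coordinates: X = right, Y = up,
    # Z = backward (because the camera looks along -Z).
    R = np.column_stack([r, u, -f])

    pitch = np.arcsin(-R[1, 2])
    yaw = np.arctan2(R[0, 2], R[2, 2])
    roll = np.arctan2(R[1, 0], R[1, 1])
    return pitch, yaw, roll
```

Choosing a different Euler order changes the extraction formulas, so pick whichever order matches the convention you use elsewhere.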
I have a 3D object that is free to rotate about the x, y and z axes, and the result is then saved as a transform matrix. In a case where the sequence of rotations is not known and the object is rotated more than 3 times (e.g. if I rotate the object x 60 degrees, y 30 degrees, z 45 degrees, then again x 30 degrees), is it possible to extract the rotation angles from the transform matrix? I know that it is possible to get the angles if the sequence of rotations is known, but if I have only the final transform matrix and nothing else, is it possible to get the rotation angles (x, y and z) from it?
Euler angle conversion is a pretty well known topic. Just normalize the matrix orientation vectors and then use something like this C source code.
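For example, a small Python equivalent of that idea, assuming the final orientation is decomposed as R = Rz(yaw) @ Ry(pitch) @ Rx(roll) on column vectors (other orders need different formulas):

```python
import numpy as np

def matrix_to_euler(M):
    """Extract one (roll, pitch, yaw) triple from a 3x3 or 4x4 transform.

    The orientation vectors are normalised first (as suggested above) to strip
    any scale. The angles reproduce the final orientation but say nothing about
    the sequence of rotations that was actually applied.
    """
    R = np.asarray(M, dtype=float)[:3, :3]
    R = R / np.linalg.norm(R, axis=0)          # normalise each column

    pitch = np.arcsin(np.clip(-R[2, 0], -1.0, 1.0))
    if abs(R[2, 0]) < 0.999999:                # regular case
        roll = np.arctan2(R[2, 1], R[2, 2])
        yaw = np.arctan2(R[1, 0], R[0, 0])
    else:                                      # gimbal lock: pitch is +/- 90 deg
        roll = 0.0
        yaw = np.arctan2(-R[0, 1], R[1, 1])
    return roll, pitch, yaw
```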
The matrix is the current state of things; it has no knowledge of what the transformations were in the past. It does not know how the matrix was built. You can take the matrix and decompose it into any pieces you like, as long as:
The data do not overlap. For example: two X turns after each other are indistinguishable from a single one (there is no way to know whether it was one, two or three different rotations summed).
The sequence order is known
A decomposition can be built out of the data (for example scale can be measured)