I have a world reference frame, let's denote it {R}, and a reference frame attached to a sensor, let's denote it {C}.
The sensor detects another reference frame, {M}, located on a marker.
q is the quaternion describing the orientation of {C} with respect to {R}.
qq is the quaternion describing the orientation of {M} with respect to {C}.
I'd like to express the orientation of {M} with respect to {R}.
I'm struggling because q and qq are not expressed relative to the same reference frame, and I can't find an easy way forward. MATLAB and Python's tf.transformations use different conventions, and I'm making a mess mixing RPY, rotation matrices, and so on. Thanks for your help.
Compose the two rotations. With the Hamilton convention (q maps {C} coordinates into {R} coordinates, and qq maps {M} coordinates into {C} coordinates), the orientation of {M} with respect to {R} is simply the product:
qq_R = q * qq
(Conventions differ between libraries: with JPL-style quaternions the factors appear in the opposite order, so double-check against your particular toolbox.)
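For concreteness, here is a minimal sketch using Unity's Quaternion (chosen only because it follows the Hamilton convention, with the right-hand factor applied first; the angle values are hypothetical):

using UnityEngine;

public static class FrameComposition
{
    public static void Example()
    {
        Quaternion q  = Quaternion.Euler(0f, 90f, 0f);  // {C} with respect to {R} (hypothetical value)
        Quaternion qq = Quaternion.Euler(30f, 0f, 0f);  // {M} with respect to {C} (hypothetical value)

        // The right-hand factor is applied first, so q * qq maps {M} coordinates into {R}.
        Quaternion qMarkerInWorld = q * qq;

        // Sanity check: undoing q must recover qq.
        Quaternion recovered = Quaternion.Inverse(q) * qMarkerInWorld;
        Debug.Log(recovered.eulerAngles); // expect (30, 0, 0)
    }
}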
As for the derivation of the rotation matrix, I have gathered a lot of information recently and basically understand it. However, my math is not very strong, so there may be mistakes in the specific derivation. I am currently using UE4 and need to understand this.
What I have learned about the rotation matrix in UE:
UE uses a left-handed coordinate system.
The Euler rotation order in UE is ZYX.
Matrices in UE are stored in row-major order and are applied by left-multiplying a row vector (v' = v * M).
As far as I can tell, there is nothing wrong with these points.
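To make the third point concrete: a row vector times a matrix equals the transpose of that matrix times the column vector, and transposing a product reverses its order, so (Rz * Ry * Rx)^T = Rx^T * Ry^T * Rz^T. A quick numerical check of this identity (a sketch using Unity's Matrix4x4 purely for convenience; the angles are hypothetical):

using UnityEngine;

public static class RowVsColumnCheck
{
    public static void Run()
    {
        Matrix4x4 rx = Matrix4x4.Rotate(Quaternion.Euler(30f, 0f, 0f));
        Matrix4x4 ry = Matrix4x4.Rotate(Quaternion.Euler(0f, 45f, 0f));
        Matrix4x4 rz = Matrix4x4.Rotate(Quaternion.Euler(0f, 0f, 60f));

        // Column-vector convention: v' = Rz * Ry * Rx * v.
        Matrix4x4 column = rz * ry * rx;

        // Row-vector convention (as in UE): v' = v * Mx * My * Mz,
        // where each M is the transpose of the column-vector matrix.
        Matrix4x4 row = rx.transpose * ry.transpose * rz.transpose;

        Debug.Log(column);        // should equal...
        Debug.Log(row.transpose); // ...this, up to floating-point error.
    }
}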
The specific problem is as follows:
The rotation matrices about the X, Y, and Z axes in a left-handed coordinate system are:
Rotation by an angle a about the z axis:
(image of the matrix)
Rotation by an angle b about the y axis:
(image of the matrix)
Rotation by an angle c about the x axis:
(image of the matrix)
Multiplying the matrices together gives the following result:
(image of the combined matrix)
Here is part of the code for FRotationMatrix in UE:
(image of the FRotationMatrix source code)
Comparing the two directly, you can see the difference.
But I don't know what is wrong with my derivation. I hope someone can tell me. Thank you very much.
Finally, my English is poor; I wrote this question with the help of a translator, so if anything is unclear, please let me know.
I am using two cameras, without lenses or any other special settings, in Webots to measure the position of an object. To perform the localization, I need to know the focal length, i.e. the distance from the camera center to the center of the imaging plane, namely f. I see the 'focus' parameter in the Camera node, but when I leave it NULL (the default) the image is still rendered normally, so I assume this parameter is unrelated to f. In addition, I need to know the width and height of a pixel in the image, namely dx and dy respectively, but I have no idea how to get this information.
This is the calibration model I used, where c denotes the camera and w the world coordinate frame. I need to calculate xw, yw, zw from u, v. For an ideal camera, gamma is 0 and u0, v0 are just half the resolution. So my problem comes down to fx and fy.
The first important thing to know is that in Webots pixels are square, so dx and dy are equal.
Then, in the Camera node you will find a 'fieldOfView' field giving the horizontal field of view; using the camera's resolution you can then compute the vertical field of view too:
2 * atan(tan(fieldOfView * 0.5) / (resolutionX / resolutionY))
Finally, you can also get the near projection plane from the 'near' field of the Camera node.
Note also that Webots cameras are regular OpenGL cameras; you can therefore find more information about the OpenGL projection matrix here, for example: http://www.songho.ca/opengl/gl_projectionmatrix.html
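Putting the pieces together, fx and fy follow from the field of view and the resolution. A minimal sketch (assuming square pixels, as noted above, and an ideal principal point at the image center; the class and method names are just for illustration):

using System;

public static class WebotsIntrinsics
{
    // fieldOfView: horizontal FOV in radians (the Camera node's 'fieldOfView').
    // width, height: the camera resolution in pixels.
    public static void Compute(double fieldOfView, int width, int height)
    {
        // Half the image width subtends half the horizontal FOV,
        // so the focal length in pixel units is:
        double fx = 0.5 * width / Math.Tan(0.5 * fieldOfView);

        // Square pixels (dx == dy) imply fy == fx; computing fy from the
        // vertical FOV formula above gives the same value.
        double fy = fx;

        // Ideal principal point: the image center.
        double u0 = 0.5 * width;
        double v0 = 0.5 * height;

        Console.WriteLine($"fx={fx:F2} fy={fy:F2} u0={u0} v0={v0}");
    }
}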
I have the normal vector of the plane. I want to map the 3D points onto a 2D plane while maintaining the same distances between them. Basically, I want to make the z coordinate of all the points on the plane equal.
How do I go about achieving this and writing a program for it (preferably in C#)? Are there any good libraries that I can use?
Would the Point Cloud Library be useful for this?
My objective is this: I have several lines (all on the same plane) in 3D space, and I want to represent these lines in 2D along with their measurements.
An example plane from my problem.
I am doing this for an application I am developing in Unity using Google ARCore.
OK, I have invested a fair amount of time in finding a solution to this problem. Since I am working with ARCore (the augmented reality SDK provided by Google), I figured out a simple solution using it. For those who want to achieve this without ARCore, refer to Question 1 and Question 2, where a new orthonormal basis has to be created, or the plane has to be rotated to align with the default planes.
For those who are using ARCore in Unity, there is a simpler solution, given in this issue on GitHub, created by me. Basically, we can easily create new axes on the 3D plane and record coordinates in this newly created coordinate system.
If you want to project 3D points onto a plane, you need to have a 2D coordinate system in mind. A natural one is defined by the global axes, but it will work well for one kind of plane (say horizontal) and not for another (say vertical).
Another choice of coordinates is the one defined by CenterPose, but it can change every frame. So if you need the 2D points for only one frame, this can be written as:
// In-plane axes derived from the plane's current pose.
Vector3 x_local_axis = DetectedPlane.CenterPose.rotation * Vector3.forward;
Vector3 z_local_axis = DetectedPlane.CenterPose.rotation * Vector3.right;
// Loop over your points and project each onto the in-plane axes.
float x = Vector3.Dot(your_3d_point, x_local_axis);
float z = Vector3.Dot(your_3d_point, z_local_axis);
If you need a 2D coordinate system that is consistent between frames, you probably want to attach an anchor to any plane of interest, perhaps at DetectedPlane.CenterPose, and do the same math as above, but with the anchor's rotation instead of the plane's rotation. The x and z axes of the anchor will then provide a frame of 2D coordinates that is consistent between frames.
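A sketch of that anchor-based variant (anchorTransform is assumed to be the Transform of an anchor you attached at DetectedPlane.CenterPose; subtracting the anchor position makes the coordinates relative to the anchor rather than to the world origin):

using UnityEngine;

public static class PlaneCoords
{
    // 2D coordinates of a world-space point in the anchor's in-plane axes.
    public static Vector2 ToPlane2D(Vector3 worldPoint, Transform anchorTransform)
    {
        Vector3 offset = worldPoint - anchorTransform.position;
        float x = Vector3.Dot(offset, anchorTransform.right);
        float z = Vector3.Dot(offset, anchorTransform.forward);
        return new Vector2(x, z);
    }
}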
So here, new local axes are created at the center of the plane, and the points obtained have only two coordinates.
I needed this in Unity C#, so here is some code that I used.
First off, project the point onto the plane.
Then, using the dot products with the target transform's right and forward vectors, I get the local 2D coordinates.
So if you want the standard coordinates, replace right with (1, 0) and forward with (0, 1).
public static Vector3 GetClosestPointOnPlane(Vector3 point, Plane3D plane){
    Vector3 dir = point - plane.position; // Vector from the plane's reference point to the point.
    plane.normal = plane.normal.normalized; // Make sure the normal has unit length.
    float dotVal = Vector3.Dot(dir.normalized, plane.normal); // Cosine of the angle between dir and the normal.
    return point + plane.normal * dir.magnitude * (-dotVal); // dir.magnitude * dotVal is the signed distance to the plane.
}
Intersection.Plane3D tempPlane = new Intersection.Plane3D(transform.position, transform.up); // Plane3D is just a point and a normal.
Vector3 closestPoint = Intersection.GetClosestPointOnPlane(testPoint.position, tempPlane);
float xPos = Vector3.Dot((closestPoint - transform.position).normalized, transform.right);
float yPos = Vector3.Dot((closestPoint - transform.position).normalized, transform.forward);
float dist = (closestPoint - transform.position).magnitude;
planePos = new Vector2(xPos * dist, yPos * dist); // This is the value that you're looking for.
@William Martens
FYI: the GetClosestPointOnPlane function is part of some old code I wrote over a decade ago back in school, in C++, and converted here to C#. It might be based on something in my old school book, but I can't say for sure. The rest I made myself after looking around for a while and not finding anything that worked.
I'm learning Unity and I have a question.
I don't understand why this line resets the rotation to (0, 0, 0). All I'm doing is re-assigning its Euler values, right?
transform.rotation =
Quaternion.Euler(transform.rotation.x,
transform.rotation.y,
transform.rotation.z);
The reason I'm doing this is that I need to lock one axis and keep the others changing. So I thought I could do it like this after the changes on the x and z axes occur:
transform.rotation = Quaternion.Euler(transform.rotation.x,
LOCKED ROTATION VALUE,
transform.rotation.z);
I'm sure it's simple, but I can't find out what's wrong.
Thanks in advance.
Here is some documentation on transform.Rotate(); I'd use this one for rotating.
// Here is an example if you want to just rotate the Z axis.
transform.Rotate(new Vector3(0, 0, 10f) * Time.deltaTime);
Also, here is some other documentation on Quaternion. You could use functions like RotateTowards.
If your game object has a Rigidbody on it, then you can lock its rotation in Unity: click on the game object and, under the Rigidbody component, click Constraints, then check the axes under Freeze Rotation.
You are using transform.rotation as if it exposed Euler angles, which it does not: transform.rotation.x, .y, and .z are quaternion components in the range [-1, 1], so Quaternion.Euler interprets them as angles of a degree or less, which is why the rotation appears to reset to (0, 0, 0). If you want the Euler angles of your rotation, you have to do this:
transform.rotation =
Quaternion.Euler(transform.rotation.eulerAngles.x,
transform.rotation.eulerAngles.y,
transform.rotation.eulerAngles.z);
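Applied to the locked-axis case from the question, that becomes (lockedY is a hypothetical placeholder for the value you want to freeze):

// Read the current Euler angles, then rebuild the rotation with the y axis pinned.
Vector3 euler = transform.rotation.eulerAngles;
transform.rotation = Quaternion.Euler(euler.x, lockedY, euler.z);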
I have two IMUs (inertial measurement units) and I want to calculate their relative rotation. Unfortunately, each IMU outputs its quaternion relative to the global frame (I'm assuming that's how quaternions work), whereas I need the rotation of one sensor relative to the other, even while both sensors have been rotated away from their initial orientation in the global axes.
For example: I have one sensor attached to the chest and the other attached to the arm, and both sensors are calibrated to the global axes. If I maintain that orientation, I can calculate the rotations just fine. However, when I rotate my body to a different orientation (90 degrees to the right) and perform the same movement, the sensors rotate around their local axes but output quaternions relative to the global axes (a rotation about a sensor's y axis is reported as a rotation around the global x axis).
I want the same movements to produce the same quaternions (and thus show the same rotations) regardless of my orientation (lying down; facing left, right, forwards, or backwards).
Basically, I want one sensor to serve as a rotating reference frame, and I want to measure the rotational changes of the other sensor relative to that reference.
The transformation from one frame Q1 to another frame Q2 is simply:
Q1intoQ2 = q1.inversed() * q2;
// check: q1 * Q1intoQ2 == q1 * q1.inversed() * q2 == q2
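In Unity C# terms (a sketch; q1 and q2 are the two sensors' global orientation quaternions):

using UnityEngine;

public static class ImuRelative
{
    // Rotation of sensor 2 expressed in the frame of sensor 1 (the reference sensor).
    public static Quaternion Relative(Quaternion q1, Quaternion q2)
    {
        return Quaternion.Inverse(q1) * q2;
    }
    // Check: q1 * Relative(q1, q2) == q2, since q1 * Quaternion.Inverse(q1) is the identity.
}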