Why hello thar. I have a 3D model whose starting angles along its 3 axes are known. Those angles are transformed into a Direction Cosine Matrix using this pattern (information found here):
New angle values are obtained as time goes by, corresponding to updates of the model's orientation. To take those new values into account, I update the DCM with them in this way:
newDCM = oldDCM * newanglesDCM
What I want and how I do it: Now the tricky part. I actually only want the Y component of the rotation matrix. In order to apply a motion to my model, I need the matrix to be flattened so the motion vector doesn't go up into the air or down into the ground. To do so, I simply extract the 3 angles from the rotation matrix and create a new one with the angles [0 Y 0].
The problem I have: When rotations are applied to the model, the DCM is updated. As a motion is detected, I multiply the motion vector [0 Yvalue 0] by the DCM that has been flattened (according to the previous explanation). The result is great when the instantaneous rotations have null or close-to-null values in the X and Z components. But as soon as the model is in a situation where X and Z have significant values, the "orientation" of the model's motion is wrong. If I apply rotations that bring it back into an "only Y" situation, it starts being good again.
What could go wrong: Either my Direction Cosine Matrix is wrong, or the technique I use to flatten the matrix is just completely stupid.
Thanks for your help, I'd really appreciate it if you could give me a hand on this!
Greg.
EDIT: Example as requested
My model has 3 axes X, Y and Z. This defines the XYZ convention when the model is at rest. At starting point t0, I know the angles dAx, dAy and dAz that allow me to rotate the model from its original configuration to the one it is in at t0. If that bugs you, let's say the model is at rest at t0; it doesn't matter.
I create the DCM just as explained in the image (let it be an identity matrix if the model started at rest).
Every now and then, rotations are applied to the model. Those rotations are also made of a dAx, dAy and dAz. Thus I update the rotation matrix (DCM) by multiplying the old one by the newly generated one: newDCM = oldDCM * newanglesDCM.
Now let's say I want the model to move, on the grid, from one point to another. Imagine the grid is a street, for example. Whether the model is oriented towards the sky, to one side or straight ahead, I want the motion to be the same: along the road, not climbing into the air or diving into the ground.
If I kept the rotation matrix as it is, applying a [0 Y 0] rotation WOULD make it go somewhere I don't want it to. Thus I try to recover my old original XZ frame by flattening the DCM. I still have the Y component, so I know where in the street the model is moving.
Imagine a character whose head is the model and who is walking outside. If he looks at a building's window and walks, he won't walk in the air right up to the window - he'll walk to the foot of the building. That is exactly what I want to do :D
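For concreteness, here is a minimal numpy sketch of the flattening step described above (it assumes a DCM built as R = Rx(ax) * Ry(ay) * Rz(az); if the pattern uses another order, the Y-angle extraction changes):

    import numpy as np
    from math import atan2, sin, cos

    def flatten_to_y(dcm):
        # With R = Rx(ax)*Ry(ay)*Rz(az), the first row is
        # [cos(ay)cos(az), -cos(ay)sin(az), sin(ay)], so a heading about Y
        # can be read off as (this matches the dφy derived in the answer below):
        ay = atan2(dcm[0, 2], dcm[0, 0])
        # Rebuild a rotation from the angles [0, ay, 0] only:
        return np.array([[ cos(ay), 0.0, sin(ay)],
                         [ 0.0,     1.0, 0.0    ],
                         [-sin(ay), 0.0, cos(ay)]])

    # Motion then stays in the XZ ground plane:
    # world_motion = flatten_to_y(dcm) @ np.array([vx, 0.0, vz])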
What you have to do is equate the elements of two rotation matrices
E_1 = Rx(dθx)Ry(dθy)Rz(dθz)
and
E_2 = Rx(dφx)Rz(dφz)Ry(dφy)
on an element-by-element basis. Find two elements that contain only sin(dφy) and cos(dφy) and divide them to get tan(dφy) = ... in terms of dθx, dθy and dθz.
I tried to do this with the DCM given, but I cannot replicate the sequence of rotations to get what you have. My E_1 above is similar, but some signs are different. In the example that I did, I got the following expressions:
dφy=atan( tan(dθy)/cos(dθz) )
dφz=atan( (cos(dθy)*sin(dθz))/(cos(dθy)*cos(dθz)*cos(dφy)+sin(dθy)*sin(dφy)) )
dφx=atan( (cos(dθx)*sin(dθy)*sin(dθz)+sin(dθx)*cos(dθz))/(cos(dθx)*cos(dθz)- ...
You have to work out your own relationships based on the sequences you use.
Note that once dφy is known, the above E_1 = E_2 equality becomes
Rx(dφx)Rz(dφz) = Rx(dθx)Ry(dθy)Rz(dθz)Ry(-dφy)
which is solved for dφz and then you have
Rx(dφx) = Rx(dθx)Ry(dθy)Rz(dθz)Ry(-dφy)Rz(-dφz)
for dφx.
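For what it's worth, a quick numerical check of the dφy expression (a sketch; the Rx, Ry, Rz conventions below are my assumption and may differ from yours in sign):

    import numpy as np
    from math import sin, cos, tan, atan, atan2

    def Rx(a): return np.array([[1, 0, 0], [0, cos(a), -sin(a)], [0, sin(a), cos(a)]])
    def Ry(a): return np.array([[cos(a), 0, sin(a)], [0, 1, 0], [-sin(a), 0, cos(a)]])
    def Rz(a): return np.array([[cos(a), -sin(a), 0], [sin(a), cos(a), 0], [0, 0, 1]])

    tx, ty, tz = 0.3, 0.5, -0.2
    E1 = Rx(tx) @ Ry(ty) @ Rz(tz)
    # Row 0 of E1 involves only ty and tz, so dφy can be read off directly:
    print(atan2(E1[0, 2], E1[0, 0]))   # ≈ 0.508
    print(atan(tan(ty) / cos(tz)))     # ≈ 0.508, matching the expression above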
Related
From what I've understood, the transform order (rotate first or translate first) yields a different [R|t].
So I want to know what the order is for the 4 possible poses you get from the essential matrix SVD.
I've read a code implementation of pose from the essential matrix (Hartley and Zisserman's Multiple View Geometry, page 259), and the author seems to interpret it as rotate first, then translate, since he retrieves the camera position using p = -R^T * t.
Also, OpenCV seems to use the translate-first-then-rotate rule, because the t vector I get from calibrating the camera is the position of the camera.
Or maybe I have been wrong and the order doesn't matter?
You shouldn't use SVD to decompose a transformation into rotation and translation components. Viewed as x' = M*x = T*R*x, the translation is just the fourth column of M, and the rotation is in the upper-left 3x3 submatrix.
If you feed the whole 4x4 matrix into SVD, I'm not sure what you'll get out, but it won't be useful. (If nothing else, U and V will not be affine.)
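As a sketch of that decomposition (assuming a plain rigid 4x4 with no scale or shear, stored as a numpy array; names are mine):

    import numpy as np

    def decompose(M):
        # x' = M*x = T*R*x: rotation is the upper-left 3x3 block,
        # translation is the top three entries of the fourth column.
        R = M[:3, :3]
        t = M[:3, 3]
        cam_pos = -R.T @ t   # camera position, matching p = -R^T * t above
        return R, t, cam_pos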
I am having trouble calculating a 3D penetration vector along one axis. I have already implemented SAT and it works. I want to calculate how much I need to offset the first box from the other so it will always sit on top of it - kind of like doing a simple box cast with a very long box.
How should I proceed with finding the offset that would push one object in the direction of a specified axis?
The first part of this you should already know; when you project each shape onto each axis, there should be some min and max scalar value for shape A, let's say AMIN and AMAX, and the same for shape B (BMIN / BMAX).
If the objects are apparently colliding on an axis, their projections will overlap, meaning either AMIN < BMIN < AMAX < BMAX or BMIN < AMIN < BMAX < AMAX. Let's assume the first.
The value of AMAX-BMIN is the distance needed to move either shape to bring them into touching contact, and the axis being tested gives you the direction.
Usually, as one iterates through all of the axes, one tracks the minimum value and its corresponding axis, and that becomes the vector needed to un-collide the shapes. (Usually called the 'minimum displacement vector' if you want to Google it.)
For you, wanting to displace them in a specific direction, you would just store the value corresponding to that specific axis, and that becomes your displacement vector (which would then be added to one shape's position to separate them).
I highly recommend Googling "minimum displacement vector sat" and checking out the first few links, specifically this one: http://www.dyn4j.org/2010/01/sat/. It's a bit dense, but it's where I learned everything I know about SAT.
EDIT And... I missed a piece. This is kinda rough, but if you want to displace the shapes along one axis (vertical in your example) based on a displacement vector gained from another axis (the normal of the long side of the bottom box), you need to project the displacement vector onto the desired (normalized) axis (using the dot product) to get the proper distance, then combine that with your desired axis.
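A small sketch of the projection and overlap bookkeeping described above (function names are mine, not from any particular library):

    import numpy as np

    def project(points, axis):
        # Scalar projections of each vertex onto a normalized axis.
        d = points @ axis
        return d.min(), d.max()

    def overlap_on_axis(pts_a, pts_b, axis):
        amin, amax = project(pts_a, axis)
        bmin, bmax = project(pts_b, axis)
        # AMAX - BMIN (or BMAX - AMIN) when the intervals overlap; a negative
        # result means a separating axis was found, i.e. no collision at all.
        return min(amax, bmax) - max(amin, bmin)

    # Usual SAT sweep: keep the smallest overlap and its axis over all candidate
    # axes to get the minimum displacement vector. To push strictly along one
    # chosen axis instead (e.g. straight up), use that axis' own overlap:
    # push = overlap_on_axis(A, B, up_axis) * up_axis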
I am writing 3D app for OpenGL ES 2.0 where the user sets a path and flies over some terrain. It's basically a flight simulator on rails.
The path is defined by a series of points created from a spline. Every timeslice I advance the current position using interpolation, i.e. I interpolate from p0 to p1, then when I reach p1 I interpolate from p1 to p2, and finally back from pN to p0.
I create a view matrix with something analogous to gluLookAt. The eye coordinate is the current position, the look-at point is the next position along the path, and the up vector is (0, 0, 1). So the camera looks towards where it is flying next, and Z points towards the sky.
But now I want to "bank" as I turn, i.e. the up vector is not necessarily straight up but changes based on the rate of turn. I know my current direction and my last direction, so I could increment or decrement the bank by some amount. The dot product would tell me the angle of turn, and a cross product would tell me if it's to the left or right. I could maintain a bank angle and keep it within the range -/+70 degrees, incrementing or decrementing appropriately.
I assume this is the correct approach but I could spend a long time implementing it to find out it isn't.
Am I on the right track and are there samples which demonstrate what I'm attempting to do?
Since you seem to have a nice smooth plane flying in normal conditions, you don't need much. You are almost right in your approach, and it will look quite natural. All you need is a cross product between 3 sequential points A, B, C: cross = cross(A-B, C-B). Now cross is the vector you need to tilt the plane around the "forward" vector. Naturally, the plane's up vector is (-gravitation), usually (0,0,1), and the forward vector at point B is C-B (if no interpolation is needed). The "side" vector is then side = normalized(cross(forward, up)). Here is where you apply the banking: side = side + cross*planeCorrectionParameter, and then up = cross(normalized(side), normalized(forward)). "planeCorrectionParameter" is a parameter you should play with; in reality it would represent some combination of parameters such as the dimensions of the wings and hull, air density, gravity, speed, mass...
Note that some of the cross operations above might need their parameter order swapped (cross(a,b) should be cross(b,a)), so play around a bit with that.
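A direct transcription of that recipe as a sketch (numpy; planeCorrectionParameter comes from the description above, everything else is my assumption):

    import numpy as np

    def norm(v):
        return v / np.linalg.norm(v)

    def banked_up(A, B, C, plane_correction=0.5):
        # A, B, C: three sequential path points as numpy arrays.
        turn = np.cross(A - B, C - B)          # the "cross" vector above
        forward = norm(C - B)
        up = np.array([0.0, 0.0, 1.0])         # -gravitation
        side = norm(np.cross(forward, up))
        side = side + turn * plane_correction  # bank by tilting the side vector
        return np.cross(norm(side), forward)   # new up; swap argument order if mirrored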
Your approach sounds correct, but it will look unnatural. Take, for example, a path shaped like a sine function: the plane might be "going" to the left when it's actually going to the right.
I can mention two solutions to your problem. First, you can take the second derivative of the spline. I'm assuming your spline is a function f(t) that returns a point (x, y, z). The second derivative of a parametric curve is a vector that points towards the rotation center: it'll point at the center of a circular path.
A couple of things to note with the above method: the second derivative of a straight line is 0, and the vector will also be 0, so you have to fix the up vector manually. Also, you might want to fix this vector so the plane won't turn upside down.
That works and will look better than your method. But it will still look unnatural for some curves. The best method I can mention is quaternion interpolation, such as Slerp.
At each point of the curve, you also have a "right" vector: the vector that points to the right of the plane. From the curve and this vector, you can calculate the up vector at this point. Then, you use quaternion interpolation to interpolate the up vectors along the curve.
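A minimal slerp sketch for the quaternion interpolation step (quaternions as [w, x, y, z] numpy arrays; my own helper, not from any particular library):

    import numpy as np
    from math import acos, sin

    def slerp(q0, q1, t):
        # Spherical linear interpolation between unit quaternions q0 and q1.
        d = float(np.dot(q0, q1))
        if d < 0.0:                  # take the shorter arc
            q1, d = -q1, -d
        if d > 0.9995:               # nearly parallel: plain lerp is stable
            q = q0 + t * (q1 - q0)
        else:
            theta = acos(d)
            q = (sin((1.0 - t) * theta) * q0 + sin(t * theta) * q1) / sin(theta)
        return q / np.linalg.norm(q)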
If position and rotation depend only on spline curvature, the easiest way is numerical differentiation of the 3D spline (you will have two derivatives, one for the vertical and one for the horizontal component). Your up and side vectors will be normals to the tangent.
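A sketch of that finite-difference approach (f and the names are mine; bank is a tuning factor akin to the correction parameter above, and the blending of world up with curvature is my own assumption about how to combine the two answers):

    import numpy as np

    def frame_at(f, t, h=1e-3, bank=0.3, world_up=np.array([0.0, 0.0, 1.0])):
        # f: parametric spline t -> numpy point. Central differences give the
        # tangent (first derivative) and the curvature vector (second
        # derivative), which points towards the center of the turn.
        d1 = (f(t + h) - f(t - h)) / (2.0 * h)
        d2 = (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)
        forward = d1 / np.linalg.norm(d1)
        # Blend world up with the curvature vector to bank into turns; on
        # straight segments d2 ~ 0 and this falls back to the world up alone.
        up = world_up + bank * d2
        side = np.cross(forward, up)
        side = side / np.linalg.norm(side)
        return forward, side, np.cross(side, forward)   # re-orthogonalized up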
I am trying to implement inverse kinematics on a 2D arm (made up of three sticks with joints). I am able to rotate the lowest arm to the desired position. Now I have some questions:
How can I make the upper arm move along with the third so the end point of the arm reaches the desired point? Do I need to use rotation matrices for both, and if so, can someone give me an example or some help? Is there any other possible way to do this without rotation matrices?
The lowest arm only moves in one direction. I tried googling this; people say that the cross product of two vectors gives the direction for the arm, but that is for 3D. I am using 2D, and the cross product of two 2D vectors gives a scalar. So how can I determine its direction?
Please, guys, any help would be appreciated.
Thanks in advance
Vikram
I'll give it a shot, but since my Robotics are two decades in the past, take it with a grain of salt.
The way I learned it, every joint was described by its own rotation matrix, defined relative to its current position and orientation. The coordinate of the whole arm's endpoint was then calculated by combining the rotation matrices together.
This achieved exactly the effect you are looking for: you could move only one joint (change its orientation), and all the other joints followed automatically.
You won't have much chance of getting around matrices here - in fact, if you use homogeneous coordinates, all joint calculations (rotations as well as translations) can be modeled with matrix multiplications. The advantage is that the full arm position can then be described with a single matrix (plus the arm's origin).
With this transformation matrix, you can tackle the inverse kinematics problem: since the matrix elements will depend on the angles of the joints, you can treat the whole calculation 'endpoint = startpoint x transformation' as a system of equations, and with startpoint and endpoint known, you can solve this system to determine the unknown angles. The difficulty here is that the system may not be solvable, or that there are multiple solutions.
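To make the 'combine the matrices' idea concrete, a small sketch with 3x3 homogeneous matrices in 2D (my own notation, not from any particular library):

    import numpy as np
    from math import sin, cos

    def rot(a):
        # 2D rotation as a homogeneous 3x3 matrix.
        return np.array([[cos(a), -sin(a), 0.0],
                         [sin(a),  cos(a), 0.0],
                         [0.0,     0.0,    1.0]])

    def trans(x, y):
        # 2D translation as a homogeneous 3x3 matrix.
        return np.array([[1.0, 0.0, x],
                         [0.0, 1.0, y],
                         [0.0, 0.0, 1.0]])

    def endpoint(angles, lengths):
        # Chain per joint: rotate, then move along the link.
        M = np.eye(3)
        for a, l in zip(angles, lengths):
            M = M @ rot(a) @ trans(l, 0.0)
        return (M @ np.array([0.0, 0.0, 1.0]))[:2]

Changing only one angle in the chain moves every link after it, which is exactly the follow-along behavior described above.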
I don't quite understand your second question, though - what are you looking for?
Instead of a rotation matrix, the rotation can be represented by its angle or by a complex number on the unit circle, but it's the same thing really. More importantly, you need a representation T of rigid body transformations, so that you can write stuff like t1 * t2 * t3 to compute the position and orientation of the third link.
Use atan2 to compute the angle between the vectors.
As the following Python example shows, those two things are enough to build a small IK solver.
from gameobjects.vector2 import Vector2 as V
from matrix33 import Matrix33 as T
from math import sin, cos, atan2, pi
import random
The gameobjects library does not have 2D transformations, so you have to write matrix33 yourself. Its interface is just like gameobjects.matrix44.
Define the forward kinematics function for the transformation from one joint to the next. We assume the joint rotates by angle and is followed by a fixed transformation joint:
def fk_joint(joint, angle): return T.rotation(angle) * joint
The transformation of the tool is tool == fk(joints, q) where joints are the fixed transformations and q are the joint angles:
def fk(joints, q):
    prev = T.identity()
    for i, joint in enumerate(joints):
        prev = prev * fk_joint(joint, q[i])
    return prev
If the base of the arm has an offset, replace the T.identity() transformation.
The OP is solving the IK problem for position by cyclic coordinate descent. The idea is to move the tool closer to the goal position by adjusting one joint variable at a time. Let q be the angle of a joint and prev be the transformation of the base of the joint. The joint should be rotated by the angle between the vectors to the tool and goal positions:
def ccd_step(q, prev, tool, goal):
    a = tool.get_position() - prev.get_position()
    b = goal - prev.get_position()
    return q + atan2(b.get_y(), b.get_x()) - atan2(a.get_y(), a.get_x())
Traverse the joints and update the tool configuration for every change of a joint value:
def ccd_sweep(joints, tool, q, goal):
    prev = T.identity()
    for i, joint in enumerate(joints):
        next = prev * fk_joint(joint, q[i])
        q[i] = ccd_step(q[i], prev, tool, goal)
        prev = prev * fk_joint(joint, q[i])
        tool = prev * next.get_inverse() * tool
    return prev
Note that fk() and ccd_sweep() are the same for 3D; you just have to rewrite fk_joint() and ccd_step().
Construct an arm with n identical links and run cnt iterations of the CCD sweep, starting from a random arm configuration q:
def ccd_demo(n, cnt):
    q = [random.uniform(-pi, pi) for i in range(n)]
    joints = [T.translation(0, 1)] * n
    tool = fk(joints, q)
    goal = V(0.9, 0.75)  # Some arbitrary goal.
    print("i Error")
    for i in range(cnt):
        tool = ccd_sweep(joints, tool, q, goal)
        error = (tool.get_position() - goal).get_length()
        print("%d %e" % (i, error))
We can try out the solver and compare the rate of convergence for different numbers of links:
>>> ccd_demo(3, 7)
i Error
0 1.671521e-03
1 8.849190e-05
2 4.704854e-06
3 2.500868e-07
4 1.329354e-08
5 7.066271e-10
6 3.756145e-11
>>> ccd_demo(20, 7)
i Error
0 1.504538e-01
1 1.189107e-04
2 8.508951e-08
3 6.089372e-11
4 4.485040e-14
5 2.601336e-15
6 2.504777e-15
In robotics we most often use DH parameters for the forward and inverse kinematics. Wikipedia has a nice introduction.
The DH (Denavit-Hartenberg) notation is part of the solution. It helps you collect a succinct set of values that describe the mechanics of your robot such as link length and joint type.
From there it becomes easier to calculate forward kinematics. The first thing you have to understand is how to transform a coordinate frame into another coordinate frame. For example, given your robot (or its DH table), what is the set of rotations and translations you have to apply to one coordinate frame (the world, for example) to know the location of a point (or vector) in the robot's wrist coordinate frame?
As you may already know, homogeneous transformation matrices are very useful for such transformations. They are 4x4 matrices that encapsulate rotation and translation. Another very useful property of these matrices is that they chain: if you have two linked coordinate frames, each defined by some rotation and translation, you can multiply the two matrices together and then only need to multiply your transformation target by that single product.
So the DH table will help you build that matrix.
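As a sketch, here is the classic DH link transform as a homogeneous matrix (standard parameter names; the composition order is rotate θ about z, translate d along z, translate a along x, rotate α about x):

    import numpy as np
    from math import sin, cos

    def dh(theta, d, a, alpha):
        # One row of the DH table -> one 4x4 link transform.
        return np.array([
            [cos(theta), -sin(theta) * cos(alpha),  sin(theta) * sin(alpha), a * cos(theta)],
            [sin(theta),  cos(theta) * cos(alpha), -cos(theta) * sin(alpha), a * sin(theta)],
            [0.0,         sin(alpha),               cos(alpha),              d],
            [0.0,         0.0,                      0.0,                     1.0]])

    # Forward kinematics: multiply the link transforms in order, e.g.
    # T = dh(q1, d1, a1, alpha1) @ dh(q2, d2, a2, alpha2) @ ...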
Inverse kinematics is a bit more complicated though and depends on your application. The complication arises from having multiple solutions for the same problem. The greater the number of DOF, the greater the number of solutions.
Think about your arm. Pinch something solid around you. You can move your arm to several locations in the space and still keep your pinching vector unchanged. Solving the inverse kinematics problem involves deciding which solution to choose as well.
So first of all I have an image like this (and of course I have all the point coordinates in 2D, so I can regenerate the lines and check where they cross each other):
(image of the lines; source: narod.ru)
But I also have another image of the same lines (I know they are the same) with new coordinates for my points, as in this image:
(image of the same lines, transformed; source: narod.ru)
So... now, having the point coordinates in the first image, how can I determine the plane rotation and Z depth in the second image (assuming the first one's center was at point (0,0,0) with no rotation)?
What you're trying to find is called a projection matrix. Determining precise inverse projection usually requires that you have firmly established coordinates in both source and destination vectors, which the images above aren't going to give you. You can approximate using pixel positions, however.
This thread will give you a basic walkthrough of the techniques you need to use.
Let me say this up front: this problem is hard. There is a reason Dan Story's linked question has not been answered. Let me provide an explanation for people who want to take a stab at it. I hope I'm wrong about how hard it is, though.
I will assume that the 2D screen coordinates and the projection/perspective matrix are known to you. You need to know at least this much (if you don't know the projection matrix, you are essentially using a different camera to look at the world). Let's call each pair of 2D screen coordinates (a_i, b_i), and I will assume the projection matrix is of the form
P = [ px 0 0 0 ]
[ 0 py 0 0 ]
[ 0 0 pz pw]
[ 0 0 s 0 ], s = +/-1
Almost any reasonable projection has this form. Working through the rendering pipeline, you find that
a_i = px x_i / (s z_i)
b_i = py y_i / (s z_i)
where (x_i, y_i, z_i) are the original 3D coordinates of the point.
Now, let's assume you know your shape in a set of canonical coordinates (whatever you want), so that the vertices are (x0_i, y0_i, z0_i). We can arrange these as columns of a matrix C. The actual coordinates of the shape are a rigid transformation of these coordinates. Let's similarly organize the actual coordinates as columns of a matrix V. Then they are related by
V = R C + v 1^T (*)
where 1^T is a row vector of ones with the right length, R is an orthogonal rotation matrix of the rigid transformation, and v is the offset vector of the transformation.
Now, you have an expression for each column of V from above: the first column is { s a_1 z_1 / px, s b_1 z_1 / py, z_1 } and so on.
You must solve the set of equations (*) for the set of scalars z_i and the rigid transformation defined by R and v.
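This is not a solver for the full nonlinear problem, but a sketch of two building blocks under the assumptions above: back-projection for guessed depths z_i, and a rigid fit of (*) via the Kabsch algorithm (all names are mine):

    import numpy as np

    def backproject(a, b, z, px, py, s=1.0):
        # Invert a_i = px*x_i/(s*z_i) and b_i = py*y_i/(s*z_i) for assumed
        # depths z; a, b, z are 1D numpy arrays. Returns a 3xN matrix whose
        # columns are the reconstructed points (the columns of V above).
        return np.vstack([s * a * z / px, s * b * z / py, z])

    def rigid_fit(C, V):
        # Kabsch: find R, v minimizing ||R C + v 1^T - V|| for 3xN matrices
        # of corresponding points.
        cC = C.mean(axis=1, keepdims=True)
        cV = V.mean(axis=1, keepdims=True)
        H = (C - cC) @ (V - cV).T
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, cV - R @ cC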
Difficulties
The equation is nonlinear in the unknowns, involving quotients of R and z_i
We have assumed up to now that you know which 2D coordinates correspond to which vertices of the original shape (if your shape is a square, this is slightly less of a problem).
We assume there is even a solution at all; if there are errors in the 2D data, then it's hard to say how well equation (*) will be satisfied; the transformation will be nonrigid or nonlinear.
It's called (digital) photogrammetry. Start Googling.
If you are really interested in this kind of problems (which are common in computer vision, tracking objects with cameras etc.), the following book contains a detailed treatment:
Ma, Soatto, Kosecka, Sastry, An Invitation to 3-D Vision, Springer 2004.
Beware: this is an advanced engineering text, and uses many techniques which are mathematical in nature. Skim through the sample chapters featured on the book's web page to get an idea.