GLSL - Hierarchical scale - matrix

I have 4 points scattered in 3D space and I want to calculate their positions after a series of transformations.
The points are linked together following this hierarchy:
A -> B //[A] is parent of [B]
B -> C //[B] is parent of [C]
C -> D //[C] is parent of [D]
This means that if a rotation is applied to [A], it must also affect every point below it in the hierarchy. For each point in the hierarchy, I need to:
1) Translate it by -PositionOfA
2) Rotate it
3) Translate it back to its position.
My problem, however, has to do with the Scale. I don't want the scale to be passed down the hierarchy and I can't figure out what ratio of the parent's scale should be applied so that the position of its children stays consistent.
In this example, the scale factor of [A] goes from 1.0 to 2.0.
As you can see, I worked out a formula that does the trick when the points are equally spaced, but it won't work if the points are scattered in space or if I start adding rotations and translations.
For now, in my code, if I need to evaluate [C]:
1) Get the position of [B]
2) Apply a Translation matrix of -PositionOfB
3) Apply the Scale matrix of B
4) Apply the Rotation matrix of B
5) Apply a Translation matrix of PositionOfB
6) Apply the Translation matrix of B
-
7) Get the position of [A]
8) Apply a Translation matrix of -PositionOfA
...
And at that step, I don't understand how I can calculate the correct scale factor to be applied.
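For reference, steps 1-6 above can be collapsed into one matrix product per joint. A minimal sketch in Python (numpy assumed; the names and the 4x4 rotation argument are just illustrative):

import numpy as np

def translation(v):
    m = np.eye(4)
    m[:3, 3] = v
    return m

def scale(s):
    return np.diag([s, s, s, 1.0])

def pivot_transform(pos_b, rot_b, scale_b, trans_b):
    pos_b = np.asarray(pos_b, dtype=float)
    # Applied right to left, this is steps 2-6: translate by -PositionOfB,
    # scale, rotate, translate back by +PositionOfB, then apply B's own
    # translation matrix.
    return (translation(trans_b) @ translation(pos_b) @ rot_b
            @ scale(scale_b) @ translation(-pos_b))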
Any ideas?
Should I apply the scale only to the first child and apply a translation to the rest?
How is it calculated in every 3D engine?

Related

Pose from essential matrix, the order of transform

From what I've understood, the transform order (rotate first or translate first) yields a different [R|t]. So I want to know the order of the 4 possible poses you get from the essential matrix SVD.
I've read a code implementation of pose from essential matrix (Hartley and Zisserman's Multiple View Geometry, page 259). The author seems to interpret it as rotate first, then translate, since he retrieves the camera position using p = -R^T * t.
Also, OpenCV seems to use a translate-first-then-rotate rule, because the t vector I got from calibrating the camera is the position of the camera.
Or maybe I have been wrong and the order doesn't matter?
You shouldn't use SVD to decompose a transformation into rotation and translation components. Viewed as x' = M*x = T*R*x, the translation is just the fourth column of M, and the rotation is in the upper-left 3x3 submatrix.
If you feed the whole 4x4 matrix into SVD, I'm not sure what you'll get out, but it won't be useful. (If nothing else, U and V will not be affine.)
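A minimal sketch of that extraction (numpy assumed; names are illustrative):

import numpy as np

def decompose(M):
    # x' = M*x = T*R*x: the rotation is the upper-left 3x3 block and
    # the translation is the top of the fourth column.
    R = M[:3, :3]
    t = M[:3, 3]
    return R, t

def camera_center(R, t):
    # Camera position in world coordinates: p = -R^T * t.
    return -R.T @ t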

Rotate line around center

I have to use a proprietary graphics engine for drawing a line. I can rotate the whole drawing around its origin point (P1). What I want is to rotate it around its center point (M), so that it looks like L_correct instead of L_wrong.
I think it should be possible to correct it by moving it from P1 to P2, but I cannot figure out what formula could be used to determine the distance. It must probably involve the angle, width and height...
So basically my question is: is there a function to determine x2 and y2 based on my available data?
Let's assume you have a primitive method that rotates a drawing by any given angle phi. What you want is to use that primitive to rotate a drawing D around a point M instead. Here is a sketch of how to proceed.
1. Translate your drawing by -M, i.e., apply the transformation T(P) = P - M to all points P in your drawing. Let T(D) be the translation of D.
2. Now use the primitive to rotate T(D) by the desired angle phi. Let R(T(D)) be the result.
3. Now translate the previous result by M to get the rotated drawing. In other words, use the transformation T'(P) = P + M.
Note that in step 1 above, M is mapped to the origin 0, where the rotation primitive is known to work. After rotating in step 2, the opposite translation in step 3 puts the drawing back in its original location, as this time 0 is mapped to M.
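A minimal 2D sketch of those three steps (Python; names are illustrative):

from math import cos, sin

def rotate_about(points, M, phi):
    mx, my = M
    c, s = cos(phi), sin(phi)
    out = []
    for x, y in points:
        x, y = x - mx, y - my                # step 1: T(P) = P - M
        x, y = c * x - s * y, s * x + c * y  # step 2: rotate by phi about 0
        out.append((x + mx, y + my))         # step 3: T'(P) = P + M
    return out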

How do I find a subset of samples in a 2D matrix given a non-axis-aligned rectangle?

I'm writing a small video game prototype and I have a heightmap (a 2D float array) that will be traversed by objects. I want to be able to get the heightmap data under the objects for use in the game.
I currently get a sub-region (yellow) of the heightmap under my objects with an AABB (Axis-aligned bounding box), as I'll be working with data both under and around them. That part is trivial.
However I can't figure out how to find the samples (red) under the objects given a rotated bounding box (not axis aligned). How might I do this?
I might suggest the following scheme:
1. Calculate the AABB of your wheel (by its vertices).
2. Get the rectangular subgrid of points within this AABB.
3. For each of these points, check whether it lies within your wheel.
In order to do part 3 you'll need some math. Suppose that you know the unit direction vector D of your wheel, the position of its center C, the half-length l and the half-thickness w. For a point P, you can check the following conditions:
abs( dot(P - C, D)) <= l
abs(cross(P - C, D)) <= w
Here is a more complex but more efficient way to solve the problem. Enumerate only the rows of the subgrid obtained with the AABB check. For each row you can determine the range of points lying within the wheel using explicit formulas in O(1) time, and then enumerate only the points within the wheel. The total time complexity is O(R + A), where R is the number of rows of the subgrid within the wheel's AABB and A is the total number of points within the wheel.
Example implementation in C#:
if (Mathf.Abs(Vector3.Dot (hfSampleGlobalPos - wheelPosePos, wheelPoseRot * Vector3.forward)) <= wheelRadius &&
    Mathf.Abs(Vector3.Cross(hfSampleGlobalPos - wheelPosePos, wheelPoseRot * Vector3.forward).y) <= wheelHalfWidth)
{
    // Do something with the sample under the wheel here
}
Filling in polygons on a raster is a standard problem in computational geometry. You can look up the words "even-odd rule" to get you started. However, here's a rough outline of what you do:
1. Loop through each scan line in your yellow sub-region.
2. Intersect the scan line with each blue polygon edge.
3. Sort the intersection points by x value.
4. Fill the interior pixels between intersection points, using the even-odd rule to determine which spans are interior.
Finally, you have to guard against degenerate cases like intersecting with vertices and horizontal lines.
Also, for simple polygons the even-odd rule reduces to a point-in-polygon problem.
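A rough sketch of that outline (Python; names are illustrative, and the degenerate cases mentioned above are handled only by the half-open edge test):

from math import ceil, floor

def scanline_fill(polygon, y_min, y_max):
    # polygon: list of (x, y) vertices; returns the interior grid points.
    filled = []
    n = len(polygon)
    for y in range(y_min, y_max + 1):
        xs = []
        for i in range(n):
            (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
            # Half-open test on y: skips horizontal edges and avoids
            # counting a shared vertex twice.
            if (y1 <= y < y2) or (y2 <= y < y1):
                xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
        xs.sort()
        for a, b in zip(xs[0::2], xs[1::2]):  # even-odd rule
            filled.extend((x, y) for x in range(ceil(a), floor(b) + 1))
    return filled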

How to flatten - ignore axis from a Direction Cosine Matrix?

Why hello thar. I have a 3D model whose starting angles around its 3 axes are known. Those angles are transformed into a Direction Cosine Matrix using this pattern (information found here):
New angle values are obtained as time goes by, corresponding to updates of the model's orientation. To take those new values into account, I update the DCM like this:
newDCM = oldDCM * newanglesDCM
What I want and how I do it: Now the tricky part. I actually only want the Y component of the rotation matrix. In order to apply a motion to my model, I need the matrix to be flattened so the motion vector doesn't go up into the air or down into the ground. To do so I simply pull the 3 angles back out of the rotation matrix and build a new one from the angles [0 Y 0].
The problem I have: When rotations are applied to the model, the DCM is updated. When a motion is detected, I multiply the motion vector [0 Yvalue 0] by the DCM that has been flattened (as explained above). The result is great when the instantaneous rotations have null or close-to-null X and Z components. But as soon as the model is in a situation where X and Z have significant values, the "orientation" of the model's motion is wrong. If I apply rotations that bring it back to an "only Y" situation, it starts being good again.
What could go wrong: Either my Direction Cosine Matrix is wrong, or the technique I use to flatten the matrix is just completely stupid.
Thanks for your help, I'd really appreciate it if you could give me a hand on this!
Greg.
EDIT: Example as requested
My model has 3 axes X, Y and Z. This defines the XYZ convention when the model is at rest. At starting point t0, I know the angles dAx, dAy and dAz that allow me to rotate the model from its original configuration to the one it is in at t0. If that bugs you, let's say that the model is at rest at t0; it doesn't matter.
I create the DCM just as explained in the image (let it be an identity matrix if it started at rest).
Every now and then rotations are applied to the model. Those rotations are also made of a dAx, dAy and dAz. Thus I update the rotation matrix (DCM) by multiplying the old one by the newly generated one: newDCM = oldDCM * newanglesDCM.
Now let's say I want the model to move, on the grid, from one point to another. Imagine the grid to be a street, for example. Whether the model is oriented towards the sky, to one side, or straight ahead, I want the motion to be the same: along the street, not rising into the air or diving into the ground.
If I kept the rotation matrix as it is, applying a [0 Y 0] rotation WOULD make the model go somewhere I don't want it to. Thus I try to recover my original XZ frame by flattening the DCM. Then I still have the Y component, so I know where in the street the model is moving.
Imagine a character whose head is the model and who is walking outside. If he looks at a building's window and walks, he won't walk up into the air towards the window - he'll walk to the foot of the building. That is exactly what I want to do :D
What you have to do is equate the elements of the two rotation matrices
E_1 = Rx(dθx)Ry(dθy)Rz(dθz)
and
E_2 = Rx(dφx)Rz(dφz)Ry(dφy)
on an element-by-element basis. Find two elements that contain only sin(dφy) and cos(dφy) and divide them to get tan(dφy) = ... in terms of dθx, dθy and dθz.
I tried to do this with the DCM given, but I cannot replicate the sequence of rotations to get what you have. My E_1 above is similar, but some signs are different. In the example that I worked out I got the following expressions:
dφy=atan( tan(dθy)/cos(dθz) )
dφz=atan( (cos(dθy)*sin(dθz))/(cos(dθy)*cos(dθz)*cos(dφy)+sin(dθy)*sin(dφy)) )
dφx=atan( (cos(dθx)*sin(dθy)*sin(dθz)+sin(dθx)*cos(dθz))/(cos(dθx)*cos(dθz)- ...
You have to work out your own relationships based on the sequences you use.
Note that once dφy is known, the above E_1 = E_2 equality becomes
Rx(dφx)Rz(dφz) = Rx(dθx)Ry(dθy)Rz(dθz)Ry(-dφy)
which is solved for dφz and then you have
Rx(dφx) = Rx(dθx)Ry(dθy)Rz(dθz)Ry(-dφy)Rz(-dφz)
for dφx.
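Numerically, the first two expressions above can be evaluated directly; a small Python sketch (the dφx expression is truncated above, so it is omitted here):

from math import atan, tan, cos, sin

def solve_phi(th_y, th_z):
    # dφy and dφz from the expressions above; angles in radians.
    ph_y = atan(tan(th_y) / cos(th_z))
    ph_z = atan((cos(th_y) * sin(th_z)) /
                (cos(th_y) * cos(th_z) * cos(ph_y) + sin(th_y) * sin(ph_y)))
    return ph_y, ph_z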

2D Inverse Kinematics Implementation

I am trying to implement inverse kinematics on a 2D arm (made up of three sticks with joints). I am able to rotate the lowest arm to the desired position. Now I have some questions:
How can I make the upper arm move along with the third one so that the end point of the arm reaches the desired point? Do I need to use rotation matrices for both, and if so, can someone give me an example or some help? Is there any other possible way to do this without rotation matrices?
The lowest arm only moves in one direction. I tried googling it; people say that the cross product of two vectors gives the direction for the arm, but that is for 3D. I am using 2D, and the cross product of two 2D vectors gives a scalar. So how can I determine its direction?
Any help would be appreciated.
Thanks in advance
Vikram
I'll give it a shot, but since my robotics days are two decades in the past, take it with a grain of salt.
The way I learned it, every joint was described by its own rotation matrix, defined relative to its current position and orientation. The coordinate of the whole arm's endpoint was then calculated by combining the rotation matrices together.
This achieved exactly the effect you are looking for: you could move only one joint (change its orientation), and all the other joints followed automatically.
You won't have much chance of getting around matrices here - in fact, if you use homogeneous coordinates, all joint calculations (rotations as well as translations) can be modeled with matrix multiplications. The advantage is that the full arm position can then be described with a single matrix (plus the arm's origin).
With this transformation matrix, you can tackle the inverse kinematics problem: since the transformation matrix's elements depend on the joint angles, you can treat the whole calculation 'endpoint = startpoint x transformation' as a system of equations, and with startpoint and endpoint known, solve this system for the unknown angles. The difficulty here is that the system may not be solvable, or that there may be multiple solutions.
I don't quite understand your second question, though - what are you looking for?
Instead of a rotation matrix, the rotation can be represented by its angle or by a complex number on the unit circle, but it's the same thing really. More importantly, you need a representation T of rigid body transformations, so that you can write stuff like t1 * t2 * t3 to compute the position and orientation of the third link.
Use atan2 to compute the angle between the vectors.
As the following Python example shows, those two things are enough to build a small IK solver.
from gameobjects.vector2 import Vector2 as V
from matrix33 import Matrix33 as T
from math import sin, cos, atan2, pi
import random
The gameobjects library does not have 2D transformations, so you have to write matrix33 yourself. Its interface is just like gameobjects.matrix44.
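For reference, here is a minimal sketch of such a matrix33, covering only the methods the example below uses (the interface mirrors gameobjects.matrix44, but this particular implementation is an assumption, not the actual library):

from math import sin, cos
from gameobjects.vector2 import Vector2

class Matrix33(object):
    def __init__(self, rows):
        self.m = rows  # 3x3 row-major list of lists

    @classmethod
    def identity(cls):
        return cls([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

    @classmethod
    def rotation(cls, a):
        return cls([[cos(a), -sin(a), 0.0],
                    [sin(a),  cos(a), 0.0],
                    [0.0,     0.0,    1.0]])

    @classmethod
    def translation(cls, x, y):
        return cls([[1.0, 0.0, x], [0.0, 1.0, y], [0.0, 0.0, 1.0]])

    def __mul__(self, other):
        a, b = self.m, other.m
        return Matrix33([[sum(a[i][k] * b[k][j] for k in range(3))
                          for j in range(3)] for i in range(3)])

    def get_position(self):
        return Vector2(self.m[0][2], self.m[1][2])

    def get_inverse(self):
        # Rigid transform: the inverse rotation is the transpose and
        # the translation is rotated back and negated.
        (a, b), (c, d) = self.m[0][:2], self.m[1][:2]
        tx, ty = self.m[0][2], self.m[1][2]
        return Matrix33([[a, c, -(a * tx + c * ty)],
                         [b, d, -(b * tx + d * ty)],
                         [0.0, 0.0, 1.0]])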
Define the forward kinematics function for the transformation from one joint to the next. We assume the joint rotates by angle and is followed by a fixed transformation joint:
def fk_joint(joint, angle): return T.rotation(angle) * joint
The transformation of the tool is tool == fk(joints, q) where joints are the fixed transformations and q are the joint angles:
def fk(joints, q):
    prev = T.identity()
    for i, joint in enumerate(joints):
        prev = prev * fk_joint(joint, q[i])
    return prev
If the base of the arm has an offset, replace the T.identity() transformation.
The OP is solving the IK problem for position by cyclic coordinate descent. The idea is to move the tool closer to the goal position by adjusting one joint variable at a time. Let q be the angle of a joint and prev be the transformation of the base of the joint. The joint should be rotated by the angle between the vectors to the tool and goal positions:
def ccd_step(q, prev, tool, goal):
    a = tool.get_position() - prev.get_position()
    b = goal - prev.get_position()
    return q + atan2(b.get_y(), b.get_x()) - atan2(a.get_y(), a.get_x())
Traverse the joints and update the tool configuration for every change of a joint value:
def ccd_sweep(joints, tool, q, goal):
    prev = T.identity()
    for i, joint in enumerate(joints):
        next = prev * fk_joint(joint, q[i])
        q[i] = ccd_step(q[i], prev, tool, goal)
        prev = prev * fk_joint(joint, q[i])
        tool = prev * next.get_inverse() * tool
    return prev
Note that fk() and ccd_sweep() are the same for 3D; you just have to rewrite fk_joint() and ccd_step().
Construct an arm with n identical links and run cnt iterations of the CCD sweep, starting from a random arm configuration q:
def ccd_demo(n, cnt):
    q = [random.uniform(-pi, pi) for i in range(n)]
    joints = [T.translation(0, 1)] * n
    tool = fk(joints, q)
    goal = V(0.9, 0.75)  # Some arbitrary goal.
    print "i Error"
    for i in range(cnt):
        tool = ccd_sweep(joints, tool, q, goal)
        error = (tool.get_position() - goal).get_length()
        print "%d %e" % (i, error)
We can try out the solver and compare the rate of convergence for different numbers of links:
>>> ccd_demo(3, 7)
i Error
0 1.671521e-03
1 8.849190e-05
2 4.704854e-06
3 2.500868e-07
4 1.329354e-08
5 7.066271e-10
6 3.756145e-11
>>> ccd_demo(20, 7)
i Error
0 1.504538e-01
1 1.189107e-04
2 8.508951e-08
3 6.089372e-11
4 4.485040e-14
5 2.601336e-15
6 2.504777e-15
In robotics we most often use DH parameters for the forward and inverse kinematics. Wikipedia has a nice introduction.
The DH (Denavit-Hartenberg) notation is part of the solution. It helps you collect a succinct set of values that describe the mechanics of your robot such as link length and joint type.
From there it becomes easier to calculate the forward kinematics. The first thing you have to understand is how to transform one coordinate frame into another. For example, given your robot (or its DH table), what set of rotations and translations do you have to apply to one coordinate frame (the world, for example) to know the location of a point (or vector) in the robot's wrist coordinate frame?
As you may already know, homogeneous transformation matrices are very useful for such transformations. They are 4x4 matrices that encapsulate both rotation and translation. Another very useful property of these matrices is that if you have two coordinate frames linked and defined by some rotation and translation, you can multiply the two matrices together and then transform your target through both frames with that single product.
So the DH table will help you build that matrix.
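As a tiny illustration of that composition property (numpy assumed; the frames are hypothetical):

import numpy as np

def make_transform(R, t):
    # Build a 4x4 homogeneous matrix from a 3x3 rotation and a translation.
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

# Two linked frames collapse into one transform, so a point in the
# wrist frame maps to the world frame with a single multiplication.
world_from_base = make_transform(np.eye(3), [0.0, 0.0, 1.0])
base_from_wrist = make_transform(np.eye(3), [0.5, 0.0, 0.0])
world_from_wrist = world_from_base @ base_from_wrist
p_world = world_from_wrist @ np.array([0.0, 0.0, 0.0, 1.0])  # homogeneous point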
Inverse kinematics is a bit more complicated though and depends on your application. The complication arises from having multiple solutions for the same problem. The greater the number of DOF, the greater the number of solutions.
Think about your arm. Pinch something solid around you. You can move your arm to several positions in space while still keeping your pinching vector unchanged. Solving the inverse kinematics problem also involves deciding which of these solutions to choose.
