I use Blender3D, but the answer might not be API-exclusive.
I have some matrices I need to assign to PoseBones. The resulting pose looks fine when there is no bone hierarchy (parenting), but messed up when there is.
I've uploaded an archive with a sample .blend of the rigged models, the text animation importer and a test animation file here:
http://www.2shared.com/file/5qUjmnIs/sample_files.html
Import the animation by selecting an Armature and running the importer on the "sba" file.
Do this for both Armatures.
This is how I assign the poses in the real (complex) importer:
matrix_basis = ... # matrix from file
animation_matrix = matrix_basis * pose.bones[bonename].matrix.copy()
pose.bones[bonename].matrix = animation_matrix
If I go to edit mode, select all bones and press Alt+P to undo parenting, the Pose looks fine again.
The API documentation says PoseBone.matrix is in "object space", but from these tests it seems clear to me that it is relative to the parent bone:
"Final 4x4 matrix after constraints and drivers are applied (object space)"
I tried doing something like this:
matrix_basis = ... # matrix from file
animation_matrix = matrix_basis * (pose.bones[bonename].matrix.copy() * pose.bones[bonename].bone.parent.matrix_local.copy().inverted())
pose.bones[bonename].matrix = animation_matrix
But it looks even worse. I experimented with the order of operations, with no luck in any combination.
For the record, in the old 2.4 API this worked like a charm:
matrix_basis = ... # matrix from file
animation_matrix = armature.bones['mybone'].matrix['ARMATURESPACE'].copy() * matrix_basis
pose.bones[bonename].poseMatrix = animation_matrix
pose.update()
Links to the Blender API reference:
http://www.blender.org/documentation/blender_python_api_2_63_17/bpy.types.BlendData.html#bpy.types.BlendData
http://www.blender.org/documentation/blender_python_api_2_63_17/bpy.types.PoseBone.html#bpy.types.PoseBone
'Object space' probably does mean relative to the parent bone. You can convert from global to local by multiplying by the inverse of the parent's transform matrix. You may also find that you'll want to multiply by the concatenation of all parent inverse transforms: multiply B1 by inverse(B0), and B2 by (inverse(B1) * inverse(B0)).
Here's some example code that does something similar (in Panda3D, not Blender, but same general idea). We start off with 3 bones with global position and rotation values, parent them together, and convert the global coordinates into the correct local matrices.
# Load three boxes ('bones'), give them global position and rotation
# each is 3 units long, at a 30 degree angle.
self.bone1=loader.loadModel("box.egg")
self.bone1.reparentTo(render)
self.bone2=loader.loadModel("box.egg")
self.bone2.reparentTo(self.bone1)
self.bone3=loader.loadModel("box.egg")
self.bone3.reparentTo(self.bone2)
'''
equivalent code, in local coordinates
self.bone1.setPos(0,0,0)
self.bone1.setHpr(0,0,30)
self.bone2.setPos(0,0,3)
self.bone2.setHpr(0,0,30)
self.bone3.setPos(0,0,3)
self.bone3.setHpr(0,0,30)
'''
# give each a global rotation value
R1=Mat4()
R1.setRotateMat(30,Vec3(0,1,0))
R2=Mat4()
R2.setRotateMat(60,Vec3(0,1,0))
R3=Mat4()
R3.setRotateMat(90,Vec3(0,1,0))
# set global translation values
# bone 1 sits at the origin, so its translation is the identity matrix
T1 = Mat4.identMat()
# position of bone 2 in global coords
T2 = Mat4.translateMat(1.271,0,2.606)
# position of bone 3 in global coords
T3 = Mat4.translateMat(3.782,0,4.036)
# set the matrix for bone 1
M1 = R1 * T1
self.bone1.setMat(M1)
# get inverse of matrix of parent
I1 = Mat4()
I1.invertFrom (M1)
# multiply bone2 matrix times inverse of parent
M2 = R2 * T2
M2 = M2 * I1
self.bone2.setMat(M2)
# get inverse of parent for next bone
I2 = Mat4()
I2.invertFrom(M2)
M3 = R3 * T3
# notice that M3 * I2 isn't enough - needs to be M3 * (I1 * I2)
M3 = M3 * (I1 * I2)
self.bone3.setMat(M3)
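Carrying the same idea back to the question's setting, here is a minimal Python/mathutils sketch (my own illustration of the math only, not tested against the Blender API; note that Panda3D composes with row vectors so the inverse goes on the right above, while mathutils uses column vectors so it goes on the left here):

from mathutils import Matrix

def global_to_local(bone_global, parent_global):
    # local matrix = inverse of the parent's global (object-space) matrix,
    # times the bone's global matrix (column-vector convention)
    if parent_global is None:
        return bone_global.copy()
    return parent_global.inverted() * bone_global

# example: parent rotated 30 degrees about Y, child placed in object space
parent_global = Matrix.Rotation(0.5236, 4, 'Y')
child_global = Matrix.Translation((0.0, 0.0, 3.0)) * Matrix.Rotation(1.0472, 4, 'Y')
child_local = global_to_local(child_global, parent_global)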
I am currently trying to determine the area inside specific contour lines on a Mollweide map projection using Basemap. Specifically, I'm looking for the area of various credible intervals, in square degrees (degrees^2), of an astronomical event on the celestial sphere. (The plot of these contours is not reproduced here.)
Fortunately, a similar question has already been answered before that helps considerably. The method outlined in the answer is able to account for holes within the contour as well which is a necessity for my use case. My adapted code for this particular method is provided below:
import numpy as np
from mpl_toolkits.basemap import Basemap
from matplotlib.path import Path
from shapely.geometry import Polygon

# generate a regular lat/lon grid.
nlats = 300; nlons = 300
delta_lon = 2.*np.pi/(nlons-1); delta_lat = np.pi/(nlats-1)
lats = (0.5*np.pi - delta_lat*np.indices((nlats,nlons))[0,:,:])
lons = (delta_lon*np.indices((nlats,nlons))[1,:,:] - np.pi)
map = Basemap(projection='moll', lon_0=0, celestial=True)
# compute native map projection coordinates of the lat/lon grid
x, y = map(lons*180./np.pi, lats*180./np.pi)
areas = []
cred_ints = [0.5,0.9]
for k in range(len(cred_ints)):
    cs = map.contourf(x,y,p1,levels=[0.0,cred_ints[k]]) ## p1 is the cumulative distribution across all points in the sky (usually determined via KDE on the data)
    ## organizing paths and computing individual areas
    paths = cs.collections[0].get_paths()
    area_of_individual_polygons = []
    for p in paths:
        sign = 1 ## <-- assures that the area of the first (outer) polygon will be summed
        verts = p.vertices
        codes = p.codes
        idx = np.where(codes==Path.MOVETO)[0]
        vert_segs = np.split(verts,idx)[1:]
        code_segs = np.split(codes,idx)[1:]
        for code, vert in zip(code_segs,vert_segs):
            ## computing the area of the polygon
            area_of_individual_polygons.append(sign*Polygon(vert[:-1]).area)
            sign = -1 ## <-- assures that the other (inner) polygons will be subtracted
    ## computing total area
    total_area = np.sum(area_of_individual_polygons)
    print(total_area)
    areas.append(total_area)
print(areas)
As far as I can tell this method works beautifully... except for one small wrinkle: it calculates the area in the projected coordinate units. I'm not entirely sure what the units are in this case, but they are definitely not degrees^2 (the calculated areas are on the order of 10^13 units^2... maybe the units are pixels?). As alluded to earlier, what I'm looking for is how to calculate the equivalent area in the global coordinate units, i.e. in degrees^2.
Is there a way to convert the area calculated in the projected domain back into the global domain in square degrees? Or perhaps is there a way to modify this method so that it determines the area in degrees^2 from the get-go?
Any help will be greatly appreciated!
For anyone who comes across this question: while I didn't figure out a way to directly convert the projected area back into the global domain, I did develop a new solution by transforming the contour path vertices (this time defined in the lat/lon coordinate system) via an area-preserving sinusoidal projection:
x = (λ - λ0) * cos(φ),  y = φ
where φ is the latitude, λ is the longitude, and λ0 is the longitude of the central meridian.
This flat projection means you can simply use the Shapely package to determine the area of the polygon defined by the projected vertices (in square units for a sphere of radius 1 unit, or more simply steradians). Multiplying this number by (180/π)^2 gives the area in square degrees for the contour in question.
Fortunately, only minor adjustments to the code mentioned in the OP were needed to achieve this. The final code is provided below:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
from shapely.geometry import Polygon

# generate a regular lat/lon grid.
nlats = 300; nlons = 300
delta_lat = np.pi/(nlats-1); delta_lon = 2.*np.pi/(nlons-1)
lats = (0.5*np.pi - delta_lat*np.indices((nlats,nlons))[0,:,:])
lons = (delta_lon*np.indices((nlats,nlons))[1,:,:])

### FOLLOWING CODE DETERMINES CREDIBLE INTERVAL SKY AREA IN DEG^2 ###
# collect and organize contour data for each credible interval
cred_ints = [0.5,0.9]
ci_areas = []
for k in range(len(cred_ints)):
    cs = plt.contourf(lons,lats,p1,levels=[0,cred_ints[k]]) ## p1 is the cumulative distribution across all points in the sky (usually determined via KDE of the dataset in question)
    paths = cs.collections[0].get_paths()
    ## organizing paths and computing individual areas
    area_of_individual_polygons = []
    for p in paths:
        sign = 1 ## <-- assures that the area of the first (outer) polygon will be summed
        vertices = p.vertices
        codes = p.codes
        idx = np.where(codes==Path.MOVETO)[0]
        verts_segs = np.split(vertices,idx)[1:]
        for verts in verts_segs:
            # transform the coordinates via an area-preserving sinusoidal projection
            # (the central meridian lon_0 is 0 here, hence the zero vector)
            x = (verts[:,0] - (0)*np.ones_like(verts[:,0]))*np.cos(verts[:,1])
            y = verts[:,1]
            verts_proj = np.stack((x,y), axis=1)
            ## computing the area of the polygon
            area_of_individual_polygons.append(sign*Polygon(verts_proj[:-1]).area)
            sign = -1 ## <-- assures that the other (inner) polygons/holes will be subtracted
    ## computing total area
    total_area = ((180/np.pi)**2)*np.sum(area_of_individual_polygons)
    ci_areas.append(total_area)
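As a quick sanity check on the unit conversion (my addition, not part of the original workflow): the whole sphere covers 4π steradians, which should come out to roughly 41253 square degrees under this scaling:

import numpy as np
# whole-sky area: 4*pi sr ~= 41252.96 deg^2
print(((180/np.pi)**2) * 4*np.pi)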
I've created a 3D scene with Blender and computed the projection matrix P (I also have the translation matrix T and the rotation matrix R).
As mentioned in the title, I'm trying to calculate the z-value, i.e. the depth of a vertex (x,y,z) from my given camera C, using these matrices.
Example:
Vertex v = [1.4, 1, 2.3] and position of camera c = [0, -0.7, 10]. The result should be somewhere around 10 - 2.3 = 7.7. Thank you for your help!
Usually the rotation matrix is written before the translation in the product. So
transform = R * T
R is the rotation matrix (usually 4 rows and 4 columns)
T is the translation matrix (4 rows and 4 columns)
* is the matrix multiplication, which here applies first T and then R
Of course I'm assuming you already know how to perform a matrix multiplication; I'm not providing any code because it is not clear if you need a Python snippet or you are using the exported model somewhere else.
After that you apply the final projection matrix (I'm assuming your projection matrix is already multiplied by the view matrix):
final = P * transform
P is the projection matrix (4 rows and 4 columns)
transform is your previously obtained (4 rows and 4 columns) matrix
The final matrix is the one that will transform every vertex of your 3D model; again you do a matrix multiplication, but in this case the second operand is a column vector whose 4th element is 1:
transformedVertex = final * Vec4(originalVertex,1)
transformedVertex is a column vector ( 4x1 matrix)
final is the final matrix (4x4)
a vertex has only 3 coordinates, so we append a 1 to make it a (4x1) column vector
* is still matrix multiplication
Once transformed, the vertex's Z value is the one that gets mapped into the Z-buffer and hence into a depth value.
At this point there is one operation that is done "by convention": dividing Z by W to normalize it. Values outside the range [0..1] are then discarded (nearer than the near clip plane or farther than the far clip plane).
See also this question:
Why do I divide Z by W?
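To make the chain concrete, here is a small NumPy sketch of the whole pipeline (my own illustration with placeholder matrices; column-vector convention, and P is assumed to already include the view transform):

import numpy as np

R = np.eye(4)                                # placeholder rotation
T = np.eye(4); T[:3, 3] = [0.0, 0.7, -10.0]  # placeholder translation (world -> camera)
P = np.array([[1.0, 0.0,  0.0,  0.0],        # toy perspective projection
              [0.0, 1.0,  0.0,  0.0],
              [0.0, 0.0,  1.0, -0.1],
              [0.0, 0.0, -1.0,  0.0]])

transform = R @ T            # T is applied first, then R
final = P @ transform

v = np.array([1.4, 1.0, 2.3, 1.0])  # vertex with a homogeneous 1 appended
clip = final @ v             # transformed (clip-space) vertex
depth = clip[2] / clip[3]    # divide Z by W to normalize
print(depth)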
EDIT:
I may have misinterpreted your question; if you need the distance between the camera and a point, it is simply:
import math

def compute_distance(cam, pos):
    dx = cam[0] - pos[0]
    dy = cam[1] - pos[1]
    dz = cam[2] - pos[2]
    return math.sqrt(dx*dx + dy*dy + dz*dz)
example
cameraposition = (10, 0, 0)
vertexposition = (2, 0, 0)
the above code
compute_distance(cameraposition, vertexposition)
outputs
8.0
Thanks for your help, here is what I was looking for:
Data setup
R: rotation matrix, 4x4
T: translation matrix, 4x4
v: any vertex as [x, y, z, 1], 4x1
Result
vec4: vector, 4x1, (x, y, z, w)
vec4 = R * T * v
The vec4.z value is the result I was looking for. Thanks!
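The same computation in NumPy, checked against the example from the question (my own sketch; the sign of z depends on the camera convention):

import numpy as np

R = np.eye(4)                                # camera with no rotation
T = np.eye(4); T[:3, 3] = [0.0, 0.7, -10.0]  # camera at (0, -0.7, 10)
v = np.array([1.4, 1.0, 2.3, 1.0])

vec4 = R @ T @ v
print(abs(vec4[2]))   # ~7.7, as expected in the question's example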
I've been working on getting Catmull-Rom splines working for a side project and am having difficulty getting them to do what I need. I tried the following two implementations, and neither worked for me; I was unable to track down any errors in my code relative to theirs (which I have to assume has been tested). I'll call theirs the "ABC" solution:
Catmull-rom curve with no cusps and no self-intersections
https://en.wikipedia.org/wiki/Centripetal_Catmull%E2%80%93Rom_spline
I then implemented the following solution (that I call the "Matrix" solution) and it did work using the edited version 3 posts down: https://www.opengl.org/discussion_boards/showthread.php/159518-catmull-rom-spline
However, this Matrix solution just implements Catmull-Rom with an 'a' value of 0.5 built into the matrix. I'd like to get chordal working, and thus I need 'a' == 1.
Given that my solution for the ABC version was causing problems, I've attempted to use the matrix here (http://algorithmist.net/docs/catmullrom.pdf) to pass in my own 'a'. Here's the original 0.5 code, followed by my modified code that passes in a user-specified 'a'.
Original Code:
float u2 = u * u;
float u3 = u2 * u;
return ((2 * x1) +
(-x0 + x2) * u +
(2*x0 - 5*x1 + 4*x2 - x3) * u2 +
(-x0 + 3*x1 - 3*x2 + x3) * u3) * 0.5f;
Modified Code:
float u2 = u * u;
float u3 = u2 * u;
static float a = 0.5f;
return ((1.0f * x1) +
((-a*x0) + (a*x2)) * u +
((2.0f*a)*x0 + (a-3.0f)*x1 + (3.0f-(2.0f*a))*x2 + (-a*x3)) * u2 +
((-a*x0) + (2.0f-a)*x1 + (a-2.0f)*x2 + (a*x3)) * u3) * 0.5f;
This of course doesn't work, and I'm not seeing why. At the bottom of page 4 in the pdf it shows the matrix with 'a' in it. I've substituted that into the above modified code and triple-checked it, yet the spline is screwed up; it should have given me the same answer. What's doubly confusing is that the results on page 5 take that resulting matrix and multiply it by 0.5, which drops all the /2's off the matrix entries. The final matrix uses THESE values, but the original matrix on page 4 is not 0.5 * matrix, it's just "matrix". Why was this 0.5 arbitrarily added, and why does everything break without it?
Anyway, does anyone know what I might be doing wrong with my equation? Can I use this matrix form to pass in my own 'a' from 0-1 and create uniform, centripetal and chordal splines or will I have to use the ABC form?
Thanks in advance!
I think the matrix with 'a' on page 4 of the pdf file is still for the uniform Catmull-Rom (CR) spline. The parameter 'a' is the tension parameter. The Wiki page (https://en.wikipedia.org/wiki/Centripetal_Catmull%E2%80%93Rom_spline) also uses an 'alpha', but for controlling the knot sequence assigned to the points. Don't confuse the tension parameter 'a' with this 'alpha'.
A "standard" uniform CR spline has alpha=0.0 (which results in a=0.5). You need alpha=1.0 for a chordal CR spline and alpha=0.5 for a centripetal CR spline. Their matrix forms both involve the knot sequences of the points. So, using a=1.0 in the matrix form of the uniform CR spline will not give you a chordal CR spline, but a uniform CR spline with stronger tangents at the data points, which typically causes an undesired spline shape.
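For comparison, here is a small Python/NumPy sketch of the knot-parameterized (Barry-Goldman) evaluation described on that Wiki page, where alpha selects uniform (0.0), centripetal (0.5), or chordal (1.0) parameterization; the function and variable names are my own:

import numpy as np

def catmull_rom_point(p0, p1, p2, p3, u, alpha=0.5):
    # Evaluate the segment between p1 and p2 at u in [0, 1].
    # alpha = 0.0 -> uniform, 0.5 -> centripetal, 1.0 -> chordal.
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
    # knot sequence built from the point spacing
    t0 = 0.0
    t1 = t0 + np.linalg.norm(p1 - p0) ** alpha
    t2 = t1 + np.linalg.norm(p2 - p1) ** alpha
    t3 = t2 + np.linalg.norm(p3 - p2) ** alpha
    t = t1 + u * (t2 - t1)
    # Barry-Goldman pyramid (assumes distinct consecutive points)
    a1 = (t1 - t) / (t1 - t0) * p0 + (t - t0) / (t1 - t0) * p1
    a2 = (t2 - t) / (t2 - t1) * p1 + (t - t1) / (t2 - t1) * p2
    a3 = (t3 - t) / (t3 - t2) * p2 + (t - t2) / (t3 - t2) * p3
    b1 = (t2 - t) / (t2 - t0) * a1 + (t - t0) / (t2 - t0) * a2
    b2 = (t3 - t) / (t3 - t1) * a2 + (t - t1) / (t3 - t1) * a3
    return (t2 - t) / (t2 - t1) * b1 + (t - t1) / (t2 - t1) * b2

With alpha=1.0 this yields the chordal spline the question is after, without touching the uniform matrix form.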
I have two meshes: mesh A and mesh B.
I'm working with MeshLab, and all I need is to align them and then extract the transformation matrix that brings B onto A.
When I use the alignment tool, I glue A and I set it as the base mesh.
Then I perform the point-based glueing on B, and optionally I use the process button.
At the end of the procedure, I save the MeshLab project.
In order to extract the transformation matrix that brings B onto A, I just open the .mlp project file (it's actually a plain text file) and read the data. Unfortunately, what I get is not what I expect.
There are two meshes, of course, A and B, and for each of them there is a transformation matrix. I expect that mesh A (the one glued and set as the base mesh) has an identity matrix, while mesh B has the transformation matrix needed to bring B onto A.
Sometimes, the matrix of mesh A is close to identity, but still not the identity one.
Here is an example:
<!DOCTYPE MeshLabDocument>
<MeshLabProject>
<MeshGroup>
<MLMesh label="A" filename="A.stl">
<MLMatrix44>
1 3.61241e-09 1.85292e-11 -5.04461e-08
-3.61241e-09 1 3.45518e-10 1.03514e-07
-1.85292e-11 -3.45518e-10 1 5.35603e-09
0 0 0 1
</MLMatrix44>
</MLMesh>
<MLMesh label="B" filename="B.stl">
<MLMatrix44>
-0.670956 -0.741136 -0.0231387 78.366
0.738444 -0.665039 -0.111463 24.2717
0.0672212 -0.0918734 0.993499 33.6056
0 0 0 1
</MLMatrix44>
</MLMesh>
</MeshGroup>
<RasterGroup/>
</MeshLabProject>
Now, my simple assumption is that for some reason MeshLab can't bring B exactly onto A. Instead it brings B very close to A, but it also needs to adjust A's position minimally for a best match.
If so, in order to have the best B to A transformation I want to perform the following:
[B matrix] * INVERTED[A matrix] = [B on A matrix]
Is this correct?
The order matters. Matrix Mb brings mesh B to position p, and matrix Ma brings mesh A to position p. Likewise, Ma_inverse brings a mesh from position p back to the location of mesh A.
Apply Mb to mesh B, now it reached p:
B_transformed = Mb * B
Bring B_transformed from position p to the location of mesh A:
B_aligned = Ma_inverse * B_transformed
So the combined operation should look like:
B_aligned = Ma_inverse * Mb * B
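As a sketch in Python/NumPy, using the two MLMatrix44 values from the project file above (vertex positions as homogeneous column vectors):

import numpy as np

Ma = np.array([[ 1,            3.61241e-09,  1.85292e-11, -5.04461e-08],
               [-3.61241e-09,  1,            3.45518e-10,  1.03514e-07],
               [-1.85292e-11, -3.45518e-10,  1,            5.35603e-09],
               [ 0,            0,            0,            1]])

Mb = np.array([[-0.670956,  -0.741136,  -0.0231387, 78.366],
               [ 0.738444,  -0.665039,  -0.111463,  24.2717],
               [ 0.0672212, -0.0918734,  0.993499,  33.6056],
               [ 0,          0,          0,          1]])

# combined transformation that brings mesh B onto mesh A
M_b_on_a = np.linalg.inv(Ma) @ Mb

# apply it to a homogeneous vertex of mesh B
v = np.array([0.0, 0.0, 0.0, 1.0])
v_aligned = M_b_on_a @ v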
I'm trying to deduce the 2D-transformation parameters from the result.
Given is a large number of samples in an unknown X-Y-coordinate system as well as their respective counterparts in WGS84 (longitude, latitude). Since the area is small, we can assume the target system to be flat, too.
Sadly I don't know which order of scale, rotate, translate was used, and I'm not even sure if there were 1 or 2 translations.
I tried to create a lengthy equation system, but that ended up too complex for me to handle. Basic geometry also failed me, as the order of transformations is unknown and I would have to check every possible combination order.
Is there a systematic approach to this problem?
Figuring out the scaling factor is easy: just choose any two points, find the distance between them in your X-Y space and in your WGS84 space, and the ratio of the two is your scaling factor.
The rotations and translations are a little trickier, but not nearly as difficult once you learn that the result of applying any number of rotations and translations (in 2 dimensions only!) can be reduced to a single rotation about some unknown point by some unknown angle.
Suddenly you have N points to determine 3 unknowns: the axis of rotation (x and y coordinates) and the angle of rotation.
Calculating the rotation looks like this:
Pr = R*(Pxy - Paxis_xy) + Paxis_xy
Pr is your rotated point in X-Y space which then needs to be converted to WGS84 space (if the axes of your coordinate systems are different).
R is the familiar rotation matrix depending on your rotation angle.
Pxy is your unrotated point in X-Y space.
Paxis_xy is the axis of rotation in X-Y space.
To actually find the 3 unknowns, you need to un-scale your WGS84 points (or equivalently scale your X-Y points) by the scaling factor you found and shift your points so that the two coordinate systems have the same origin.
First, finding the angle of rotation: take two corresponding pairs of points P1, P1' and P2, P2' and write out
P1' = R(P1-A) + A
P2' = R(P2-A) + A
where I substituted A = Paxis_xy for brevity. Subtracting the two equations gives:
P2'-P1' = R(P2-P1)
B = R * C
Bx = cos(a) * Cx - sin(a) * Cy
By = sin(a) * Cx + cos(a) * Cy
Bx * Cx + By * Cy = cos(a) * (Cx*Cx + Cy*Cy)
cos(a) = (Bx * Cx + By * Cy) / (Cx*Cx + Cy*Cy)
By * Cx - Bx * Cy = sin(a) * (Cx*Cx + Cy*Cy)
sin(a) = (By * Cx - Bx * Cy) / (Cx*Cx + Cy*Cy)
a = atan2(sin(a), cos(a)) <-- to get the right quadrant
And you have your angle. You can also do a quick check that cos(a) * cos(a) + sin(a) * sin(a) == 1, to make sure both that you got the calculations correct and that your system really is an orientation-preserving isometry (consists only of translations and rotations).
Now that we know a, we know R, and so to find A we do:
P1' = R(P1-A) + A
P1' - R*P1 = (I-R)A
A = inverse(I-R) * (P1' - R*P1)
where the inversion of a 2x2 matrix is easy.
EDIT: There is an error in the above, or more specifically one case that needs to be treated separately.
There is one combination of translations and rotations that does not reduce to a single rotation and that is a single translation. You can think of it in terms of fixed points (how many points are unchanged after the operation).
A translation has no fixed points (all points are changed) and a rotation has 1 fixed point (the axis doesn't change). It turns out that two rotations leave 1 fixed point and a translation and a rotation leaves 1 fixed point, which (with a little proof that says the number of fixed points tells you the operation performed) is the reason that arbitrary combinations of these result in a single rotation.
What this means for you is that if your angle comes out as 0, then using the method above will give you A = 0 as well, which is likely incorrect. In this case the transformation is a pure translation, and you have to take A = P1' - P1 (now a translation vector rather than a fixed point).
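A compact NumPy sketch of this recipe (my own illustration; it assumes the scale has already been removed and both point sets share an origin):

import numpy as np

def recover_rotation(P1, P1p, P2, P2p):
    # C and B are the difference vectors before and after the transform
    C = np.asarray(P2, float) - np.asarray(P1, float)
    B = np.asarray(P2p, float) - np.asarray(P1p, float)
    d = C[0]*C[0] + C[1]*C[1]
    cos_a = (B[0]*C[0] + B[1]*C[1]) / d
    sin_a = (B[1]*C[0] - B[0]*C[1]) / d
    a = np.arctan2(sin_a, cos_a)          # angle in the right quadrant
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    if np.isclose(a, 0.0):
        # pure translation: no fixed point, return the translation instead
        return a, np.asarray(P1p, float) - np.asarray(P1, float)
    # A = inverse(I - R) * (P1' - R*P1)
    A = np.linalg.solve(np.eye(2) - R,
                        np.asarray(P1p, float) - R @ np.asarray(P1, float))
    return a, A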
If I understood the question correctly, you have n points (X1,Y1),...,(Xn,Yn), the corresponding points, say, (x1,y1),...,(xn,yn) in another coordinate system, and the former are supposedly obtained from the latter by rotation, scaling and translation.
Note that this data does not determine the fixed point of rotation / scaling, or the order in which the operations "should" be applied. On the other hand, if you know these beforehand or choose them arbitrarily, you will find a rotation, translation and scaling factor that transform the data as supposed to.
For example, you can pick any point, say, p0 = [X1, Y1]^T (a column vector), as the fixed point of rotation & scaling, and subtract its coordinates from those of two other points to get p2 = [X2-X1, Y2-Y1]^T and p3 = [X3-X1, Y3-Y1]^T. Also take the column vectors q2 = [x2-x1, y2-y1]^T and q3 = [x3-x1, y3-y1]^T. Now [p2 p3] = A*[q2 q3], where A is an unknown 2x2 matrix representing the roto-scaling. You can solve it (unless you were unlucky and chose degenerate points) as A = [p2 p3] * [q2 q3]^-1, where ^-1 denotes the matrix inverse (of the 2x2 matrix [q2 q3]). Now, if the transformation between the coordinate systems really is a roto-scaling-translation, all the points should satisfy Pk = A * (Qk - q0) + p0, where Pk = [Xk, Yk]^T, Qk = [xk, yk]^T, q0 = [x1, y1]^T, and k = 1,...,n.
If you want, you can quite easily determine the scaling factor and rotation angle from the components of A, or combine b = -A * q0 + p0 to get Pk = A*Qk + b.
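A few lines of NumPy illustrating this (a sketch; the sample points are made up, generated here with scale 2, rotation 30 degrees and translation (10, 3) so the recovered values can be checked):

import numpy as np

# sample points: P in the target system, Q in the unknown X-Y system
P = np.array([[10.0, 3.0], [11.232, 4.866], [8.3464, 6.664]])  # rows: (Xk, Yk)
Q = np.array([[0.0, 0.0], [1.0, 0.5], [0.2, 2.0]])             # rows: (xk, yk)

p0, q0 = P[0], Q[0]
# columns are the difference vectors p2, p3 and q2, q3
Pm = np.column_stack((P[1] - p0, P[2] - p0))
Qm = np.column_stack((Q[1] - q0, Q[2] - q0))

A = Pm @ np.linalg.inv(Qm)          # the 2x2 roto-scaling matrix
b = -A @ q0 + p0                    # so that Pk = A*Qk + b

# for a true orientation-preserving roto-scaling, det(A) = scale^2
scale = np.sqrt(np.linalg.det(A))   # ~2.0
angle = np.arctan2(A[1, 0], A[0, 0])  # ~0.5236 rad (30 degrees)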
The above method does not react well to noise or choosing degenerate points. If necessary, this can be fixed by applying, e.g., Principal Component Analysis, which is also just a few lines of code if MATLAB or some other linear algebra tools are available.