I have a problem loading a COLLADA mesh animation from a file. The skeleton's start pose is defined like this:
<translate sid="translate">-0.01199548 0.1422831 -0.009544329</translate>
<rotate sid="jointOrientZ">0 0 1 0</rotate>
<rotate sid="jointOrientY">0 1 0 0</rotate>
<rotate sid="jointOrientX">1 0 0 0</rotate>
<rotate sid="rotateZ">0 0 1 -6.883375</rotate>
<rotate sid="rotateY">0 1 0 -10.62618</rotate>
<rotate sid="rotateX">1 0 0 8.255196</rotate>
I figure that the rotations should be applied in the order they are listed here, or am I missing something? I have found out how the rotation elements work: the first three values define the axis to rotate around and the last value is the angle in degrees.
But for some reason I get a very weird result. I have the system working for a COLLADA mesh that represents this same transform as a single matrix.
Just like you said, the order of pseudo-commands is:
translateObject()
orientZObject()
orientYObject()
orientXObject()
rotateZObject()
rotateYObject()
rotateXObject()
No other combination is possible; matrix multiplication is not commutative.
This works for me when parsing COLLADA into a WebGL renderer. For matrix operations I use the gl-matrix JavaScript library.
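For illustration, here is a minimal sketch of how that start pose could be composed with gl-matrix, assuming the gl-matrix 2.x-style API where mat4.translate/rotate take an output matrix (the deg helper and the variable name local are mine; the values come from your snippet). Each call post-multiplies onto local, so the transforms are applied in exactly the order they appear in the <node>:
// Assumes mat4 is available, e.g. import { mat4 } from 'gl-matrix'
var deg = function (d) { return d * Math.PI / 180; };  // COLLADA <rotate> angles are in degrees
var local = mat4.create();                             // start from identity
mat4.translate(local, local, [-0.01199548, 0.1422831, -0.009544329]);
mat4.rotateZ(local, local, deg(0));           // jointOrientZ (angle 0 in this snippet)
mat4.rotateY(local, local, deg(0));           // jointOrientY
mat4.rotateX(local, local, deg(0));           // jointOrientX
mat4.rotateZ(local, local, deg(-6.883375));   // rotateZ
mat4.rotateY(local, local, deg(-10.62618));   // rotateY
mat4.rotateX(local, local, deg(8.255196));    // rotateX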
I am currently following this tutorial by Jerome Etienne on generating a procedural city using Three.js. The tutorial uses revision 59 of Three.js while I am working with revision 73.
The problem comes from this line in the tutorial,
THREE.GeometryUtils.merge( cityGeometry, buildingMesh );
The method is no longer available. The new way to accomplish this according to this answer is,
buildingMesh.updateMatrix();
cityGeometry.merge( buildingMesh.geometry, buildingMesh.matrix );
However, when I do this, the location of the roof in the UV map changes.
This is what it looks like when I render the buildings individually.
And this is what it looks like when I merge them. Notice the roof location in the UV map.
Specification of the roof's UV map is per the tutorial. Specifically,
geometry.faceVertexUvs[0][4][0].set( 0, 0 );
geometry.faceVertexUvs[0][4][1].set( 0, 0 );
geometry.faceVertexUvs[0][4][2].set( 0, 0 );
geometry.faceVertexUvs[0][5][0].set( 0, 0 );
geometry.faceVertexUvs[0][5][1].set( 0, 0 );
geometry.faceVertexUvs[0][5][2].set( 0, 0 );
and the buildingMesh is created as follows (in a for loop where n is the number of buildings),
var buildingMesh = new THREE.Mesh( geometry );
What do I need to change or do differently in order for the merged mesh to respect the geometry's UV map?
Here is a version that uses the latest three.js release (r79). From your code I don't see how my update differs from yours, but all roofs are rendered correctly:
https://codepen.io/Sphinxxxx/pen/WrbvEz?editors=0010
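For reference, a minimal sketch of the merge pattern I use, assuming one cloned geometry per building so the per-building UV edits don't leak into a shared geometry (the clone is my addition, not necessarily the cause of your issue; the names cityGeometry, buildingMesh, geometry, material, scene and n follow your question and the tutorial):
var cityGeometry = new THREE.Geometry();
for (var i = 0; i < n; i++) {
    var buildingMesh = new THREE.Mesh(geometry.clone());
    // ...position / rotate / scale the building here, as in the tutorial...
    buildingMesh.updateMatrix();                                    // bake the transform into the mesh matrix
    cityGeometry.merge(buildingMesh.geometry, buildingMesh.matrix); // merge into the city geometry
}
var cityMesh = new THREE.Mesh(cityGeometry, material);
scene.add(cityMesh);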
Here I am trying to render a cube/plane geometry using THREE.ShaderMaterial with THREE.ShaderLib['lambert']. It loads perfectly, but I am struggling to change each face's color with an opacity value.
As of Three.js 0.127 you can simply use vertexColors, and it supports alpha (opacity) values.
On your material, set
material.vertexColors = true
On your geometry, make sure the color attribute's itemSize is set to include alpha:
geometry.attributes.color.itemSize === 4
then provide 4 values (RGBA) for each vertex instead of 3 (RGB).
Docs:
Material.vertexColors
ShaderMaterial.vertexColors
Example usage:
webgl_geometry_colors
Note that the example uses itemSize 3, without opacity. Make sure you use itemSize 4 and provide 4 numbers per vertex instead of 3.
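A minimal sketch of that setup (geometry here is a placeholder BufferGeometry; MeshLambertMaterial stands in for your ShaderLib['lambert'] material, and transparent: true is my assumption for making the alpha channel actually blend):
// One RGBA tuple per vertex; itemSize 4 tells three.js the color attribute carries alpha
const colors = [];
for (let i = 0; i < geometry.attributes.position.count; i++) {
    colors.push(1.0, 0.0, 0.0, 0.5);   // red at 50% opacity for every vertex
}
geometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 4));
const material = new THREE.MeshLambertMaterial({
    vertexColors: true,   // read colors from the geometry's 'color' attribute
    transparent: true     // assumption: needed so the per-vertex alpha is blended
});
const mesh = new THREE.Mesh(geometry, material);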
I am trying to do some online mapping with d3, but running into a problem when I try to plot a line between two points.
I have calculated the centroids of two polygons (source and target).
In the code:
var projection = d3.geo.mercator()
.scale(width)
.translate([0, 0]);
var path = d3.geo.path()
.projection(projection);
From the JS console:
> path({type: "LineString",
coordinates: [path.centroid(source_country), path.centroid(target_country)]});
"M277.05056877663407,121.67976219138909L-694.1792414247936,NaN"
Yet, the centroid calculations seem to be working fine (plotting those points shows them on the map)
> [path.centroid(source_country), path.centroid(target_country)]
[[103.89396329123777, -41.453727169465765], [-260.3172155342976, -245.57309459883245]]
Any ideas why that NaN is appearing at the end of the path generated for my LineString?
The problem here is that you're projecting the lat/lon coordinates twice. The path() operator expects to take lat/lon and project to pixels; the path.centroid() method also expects a lat/lon geometry, and it also produces pixel-based coordinates.
So when you call path on [path.centroid(...), path.centroid(...)], you're trying to project already-projected coordinates. You get a NaN because the y-position of the pixel coordinates, -245, is out of range for a latitude value.
The easiest way to fix this is probably to use d3.svg.line to create the centroid-centroid path. I haven't tested this, but I think it would look like:
var line = d3.svg.line();
line([path.centroid(source_country), path.centroid(target_country)]);
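If that works, appending the result to the SVG would look something like this (svg is assumed to be your top-level selection; d3.svg.line's default accessors already read d[0] as x and d[1] as y, which matches the pixel pairs returned by path.centroid):
svg.append("path")
    .attr("d", line([path.centroid(source_country), path.centroid(target_country)]))
    .attr("stroke", "steelblue")
    .attr("fill", "none");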
OK, I just ran into the same error. For anyone who hits this NaN problem:
The coordinate format must be correct. For example, for type Polygon the coordinates must be a 3-level nested array, e.g. [[[1,2],[2,3]]].
Coordinate values must be floats/integers, not strings (e.g. 1 is correct, "1" causes the error).
You can inspect the detailed content of the generated path string (e.g. M...L...Z...) to find out where the error is.
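For example, a Polygon that the path generator accepts without producing NaN (note the 3-level nesting, the numeric values, and the ring closing back on its first coordinate; path is the generator from the question above):
var valid = {
    type: "Polygon",
    coordinates: [          // level 1: array of rings
        [                   // level 2: one ring (array of positions)
            [103.8, 1.30],  // level 3: [longitude, latitude] as numbers, not strings
            [104.0, 1.30],
            [104.0, 1.45],
            [103.8, 1.30]   // the last position repeats the first to close the ring
        ]
    ]
};
path(valid);  // "M...L...Z" with no NaN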
I have floating-point data [size: 4000 x 140]. I want to convert it to an IplImage in OpenCV. To give an idea of the data, here are the first 8 x 8 entries. The values are very close to zero, so I am getting a dark image.
-1.14E-04 -4.71E-04 -1.27E-04 2.43E-04 4.58E-04 1.63E-04 2.56E-04 2.86E-04
1.12E-04 -2.80E-04 2.89E-05 -2.18E-04 4.08E-05 -2.23E-04 -7.96E-05 -3.97E-05
-3.98E-04 -2.35E-04 6.11E-04 4.53E-05 4.74E-05 8.02E-05 2.10E-04 1.10E-04
2.08E-04 3.09E-04 -1.34E-04 -2.58E-04 -2.25E-04 -1.74E-04 2.28E-04 2.65E-04
-6.65E-04 -2.94E-04 6.37E-04 -5.16E-05 9.90E-05 1.05E-04 -2.20E-04 -5.49E-05
1.85E-04 5.69E-04 -5.19E-04 -4.98E-05 2.07E-04 -2.00E-05 1.24E-04 1.49E-04
1.54E-04 -4.09E-04 4.29E-04 -7.67E-04 5.19E-04 3.56E-04 -4.82E-04 3.66E-04
-1.71E-04 -5.15E-04 5.71E-04 -5.68E-04 -2.75E-04 -6.17E-05 1.40E-04 2.19E-04
1) When I multiply these entries by a factor like 10E4 or 10E5, I can see an image, but the image quality is very poor compared to the MATLAB-generated image.
MATLAB code that produces the corresponding image:
[file, path] = uigetfile;
data = load(strcat(path, file));
figure;
imagesc(data);
colormap(gray);
OpenCV code sequence:
I created a CvMat and filled it with the data.
I prepared an IplImage from that CvMat.
I resized the image (560 x 420).
2) There are many negative values. Should those be considered zero? Or should a number (like 10E-4) be added to all the data to make every entry positive? Or should I proceed in some other way?
3) I changed the contrast and brightness, but those seem to be useless.
Try mapping the minimum value to 0.0 and the maximum to 1.0.
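In other words, for every element compute scaled = (value - min) / (max - min), where min and max are taken over the whole matrix; that stretches the data across the full 0.0-1.0 range, which is essentially what MATLAB's imagesc does before applying the colormap. If I remember the old C API correctly, cvNormalize(src, dst, 0.0, 1.0, CV_MINMAX) does this in one call, but treat that as an assumption and check it against your OpenCV version.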
I have a rigged (biped) and animated model in 3D Studio Max which I've exported as a .x file.
When I use it, the animations run fine. However, I've been trying to get the model itself to lean and turn from the hip, and I'm having difficulty finding where in the bone hierarchy I should apply the rotation matrices.
In 3D Studio Max there is a biped object on the model called Bip01. When I select and rotate it, the rotation cascades to all the bones above the hip, so I assumed that applying rotation matrices to the D3DXFRAME of the same name (Bip01) would have the same effect, but it does not. What happens is that the effect ends up applying to everything in the bone hierarchy, so applying transformations to Bip01 is like applying them to the root bone (which it might be, as I'm not sure how to tell one bone from another).
Here's the code where the frame transformations are updated, plus a bit of code I added attempting to apply the matrix transformation to Bip01. I'm not sure if there is any other relevant code I can show... (the rotation value is just a random value I threw in)
void CAnimInstance::UpdateFrames( MultiAnimFrame* pFrame, D3DXMATRIX* pmxBase )
{
assert( pFrame != NULL );
assert( pmxBase != NULL );
if(strcmp(pFrame->Name, "Bip01") == 0 )
{
D3DXMATRIX rot;
D3DXMatrixRotationY( &rot, 3.141 );
D3DXMatrixMultiply( &pFrame->TransformationMatrix,
&pFrame->TransformationMatrix,
&rot );
}
D3DXMatrixMultiply( &pFrame->TransformationMatrix,
&pFrame->TransformationMatrix,
pmxBase );
// transform siblings by the same matrix
if( pFrame->pFrameSibling )
UpdateFrames( ( MultiAnimFrame* )pFrame->pFrameSibling, pmxBase );
// transform children by the transformed matrix - hierarchical transformation
if( pFrame->pFrameFirstChild )
{
UpdateFrames( ( MultiAnimFrame* )pFrame->pFrameFirstChild,
&pFrame->TransformationMatrix );
}
}
What I think I should be doing is finding all the child frames of Bip01 and applying the transform to them, but how do I do that?
Bip01 is the root node for the skeleton in Character Studio, which I assume is where your skeleton is set up.
So your code is correct, i.e. applying a rotation to Bip01 and then cascading it down to all its children will update all the bones in the skeleton.
I'm guessing (and it's a total guess) that the reason you aren't seeing this in 3D Studio Max is that it's set up with a bunch of constraints to help the animator.
What I'd suggest doing is finding the bone names - they usually follow a convention like Bip01 - L Finger 1 - and then finding the name of the hip bone (it's probably called Bip01 - Hip).
Alternatively, at startup in your code, iterate from Bip01 down through all the children and build up a dictionary of all the bone names.
I'm sure there's a better answer, but my 3DS Max is pretty rusty, so that's about the best I can do for now :)