I created a custom mesh by adding vertices and faces to a new THREE.Geometry(), then running computeFaceNormals() and computeVertexNormals() on it to smooth out the rendering (I'm using MeshPhongMaterial). Without computeVertexNormals(), parts of my mesh appear striped. The problem is that the stock computeVertexNormals() included in r69 ignores sharp edges. It's an elegant function that builds each vertex's normal by averaging the surrounding face normals, but that averaging also happens at edges I need to keep sharp in appearance. There were some promising comments on another question on the same topic, but no code was posted to solve the issue of keeping edges sharp.
I have attempted to modify computeVertexNormals() to add edge detection, but with no luck. My attempt detects the angle between neighboring faces and only adds a face's normal to the average if it is within a given threshold. Here's my code:
function computeVertexNormals( object, angle_threshold, areaWeighted ) {

    // will compute smoothed normals only where neighboring faces diverge
    // less than the given angle (in degrees)
    var v, vl, f, fl, face, vertices;
    var angle = angle_threshold * 0.0174532925; // degrees to radians

    vertices = new Array( object.vertices.length );
    for ( v = 0, vl = object.vertices.length; v < vl; v ++ ) {
        vertices[ v ] = new THREE.Vector3();
    }

    if ( areaWeighted === true ) {

        // vertex normals weighted by triangle areas
        // http://www.iquilezles.org/www/articles/normals/normals.htm
        var vA, vB, vC;
        var cb = new THREE.Vector3(), ab = new THREE.Vector3();

        for ( f = 0, fl = object.faces.length; f < fl; f ++ ) {
            face = object.faces[ f ];
            vA = object.vertices[ face.a ];
            vB = object.vertices[ face.b ];
            vC = object.vertices[ face.c ];
            cb.subVectors( vC, vB );
            ab.subVectors( vA, vB );
            cb.cross( ab ); // the cross product's length is proportional to the triangle's area
            vertices[ face.a ].add( cb );
            vertices[ face.b ].add( cb );
            vertices[ face.c ].add( cb );
        }

    } else {

        for ( f = 0, fl = object.faces.length; f < fl; f ++ ) {
            face = object.faces[ f ];
            vertices[ face.a ].add( face.normal );
            vertices[ face.b ].add( face.normal );
            vertices[ face.c ].add( face.normal );
        }

    }

    for ( v = 0, vl = object.vertices.length; v < vl; v ++ ) {
        vertices[ v ].normalize();
    }

    for ( f = 0, fl = object.faces.length; f < fl; f ++ ) {
        face = object.faces[ f ];

        //**********my modifications are all in this last section*************
        if ( face.normal ) {
            // angleTo() returns radians, so compare against the converted
            // threshold, not the raw degree value
            if ( vertices[ face.a ].angleTo( face.normal ) < angle ) {
                face.vertexNormals[ 0 ] = vertices[ face.a ].clone();
            } else {
                face.vertexNormals[ 0 ] = face.normal.clone();
            }
            if ( vertices[ face.b ].angleTo( face.normal ) < angle ) {
                face.vertexNormals[ 1 ] = vertices[ face.b ].clone();
            } else {
                face.vertexNormals[ 1 ] = face.normal.clone();
            }
            if ( vertices[ face.c ].angleTo( face.normal ) < angle ) {
                face.vertexNormals[ 2 ] = vertices[ face.c ].clone();
            } else {
                face.vertexNormals[ 2 ] = face.normal.clone();
            }
        }
    }
}
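I call it like this (a minimal usage sketch; geometry is my THREE.Geometry instance):
geometry.computeFaceNormals(); // face normals must exist before the comparison
computeVertexNormals( geometry, 45, true ); // smooth, but keep edges sharper than 45 degrees
geometry.normalsNeedUpdate = true; // required if the geometry has already been rendered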
Can anybody please offer a strategy for crease detection so I can have smooth shapes with some sharp edges?
Thanks in advance!
WestLangley is correct in the comment above. To get the sharp edges I wanted, I simply duplicated the vertices that lay on "creases" while constructing my geometry, then used the standard computeVertexNormals() function from the THREE.Geometry prototype.
I was constructing my geometry with a home-made 'loft' function: iterating through an array of shapes (index i) and creating B-splines between their vertices (index j), then constructing a mesh from the B-splines. The fix was to test the angle at each vertex of each shape: if it was larger than a given threshold (I used 70 degrees), I added the B-spline a second time, effectively duplicating the vertices. Sorry if the code below is a little cryptic taken out of context.
// test if this vertex is on a crease by comparing the edge directions
// entering and leaving it (the shape is closed, so wrap around at the ends)
if ( j == 0 ) {
    before = arrCurves[ i ].vertices[ j ].clone().sub( arrCurves[ i ].vertices[ arrCurves[ i ].vertices.length - 1 ] );
} else {
    before = arrCurves[ i ].vertices[ j ].clone().sub( arrCurves[ i ].vertices[ j - 1 ] );
}
if ( j == arrCurves[ i ].vertices.length - 1 ) {
    after = arrCurves[ i ].vertices[ 0 ].clone().sub( arrCurves[ i ].vertices[ j ] );
} else {
    after = arrCurves[ i ].vertices[ j + 1 ].clone().sub( arrCurves[ i ].vertices[ j ] );
}
if ( before.angleTo( after ) > crease_threshold ) {
    // here's where I'm adding the curve a second time to make the 'crease'
    arrSplines.push( new THREE.SplineCurve3( nurbsCurve.getPoints( resolution ) ) );
}
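Reduced to a minimal sketch outside my loft code (creaseIndex and faceOnOtherSide are placeholder names, not variables from my project), the duplication idea is:
// duplicate a crease vertex so the faces on each side reference different
// vertex records; computeVertexNormals() then averages each side separately
var duplicateIndex = geometry.vertices.length;
geometry.vertices.push( geometry.vertices[ creaseIndex ].clone() );
// faces on one side of the crease keep creaseIndex; point the other side
// at the duplicate (shown here for one hypothetical face's corner 'a')
faceOnOtherSide.a = duplicateIndex;
geometry.computeFaceNormals();
geometry.computeVertexNormals(); // the smoothing now stops at the crease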
Works like a charm. Thanks WestLangley!
Similar to the Stack Overflow question Three.js: Get updated vertices with morph targets, I am interested in how to get the "actual" position of the vertices of a mesh with a skeletal animation.
I have tried printing out the position values, but they are never actually updated (as I understand it, this is because they are calculated on the GPU, not the CPU). The answer to the question above mentions doing the same computations on the CPU as on the GPU to get up-to-date vertex positions for morph target animations, but is there a way to apply the same approach to skeletal animations? If so, how?
Also, for the morph targets, someone pointed out that this code is already present in the Mesh.raycast function (https://github.com/mrdoob/three.js/blob/master/src/objects/Mesh.js#L115). However, I don't see how the raycast works with skeletal-animation meshes: how does it know the updated positions of the faces?
Thank you!
A similar topic was discussed in the three.js forum some time ago. I presented a fiddle there that computes the AABB of a skinned mesh per frame. The code performs the same vertex displacement in JavaScript that normally happens in the vertex shader. The routine looks like this:
// temporary objects, reused across calls to avoid per-frame allocations
var vertex = new THREE.Vector3();
var temp = new THREE.Vector3();
var skinned = new THREE.Vector3();
var skinIndices = new THREE.Vector4();
var skinWeights = new THREE.Vector4();
var boneMatrix = new THREE.Matrix4();

function updateAABB( skinnedMesh, aabb ) {

    var skeleton = skinnedMesh.skeleton;
    var boneMatrices = skeleton.boneMatrices;
    var geometry = skinnedMesh.geometry;

    var index = geometry.index;
    var position = geometry.attributes.position;
    var skinIndex = geometry.attributes.skinIndex;
    var skinWeight = geometry.attributes.skinWeight;

    var bindMatrix = skinnedMesh.bindMatrix;
    var bindMatrixInverse = skinnedMesh.bindMatrixInverse;

    var i, j, si, sw;

    aabb.makeEmpty();

    if ( index !== null ) {

        // indexed geometry

        for ( i = 0; i < index.count; i ++ ) {

            vertex.fromBufferAttribute( position, index.getX( i ) );
            skinIndices.fromBufferAttribute( skinIndex, index.getX( i ) );
            skinWeights.fromBufferAttribute( skinWeight, index.getX( i ) );

            // the following code section is normally implemented in the vertex shader

            vertex.applyMatrix4( bindMatrix ); // transform to bind space
            skinned.set( 0, 0, 0 );

            for ( j = 0; j < 4; j ++ ) {

                si = skinIndices.getComponent( j );
                sw = skinWeights.getComponent( j );
                boneMatrix.fromArray( boneMatrices, si * 16 );

                // weighted vertex transformation
                temp.copy( vertex ).applyMatrix4( boneMatrix ).multiplyScalar( sw );
                skinned.add( temp );
            }

            skinned.applyMatrix4( bindMatrixInverse ); // back to local space

            // expand aabb
            aabb.expandByPoint( skinned );
        }

    } else {

        // non-indexed geometry

        for ( i = 0; i < position.count; i ++ ) {

            vertex.fromBufferAttribute( position, i );
            skinIndices.fromBufferAttribute( skinIndex, i );
            skinWeights.fromBufferAttribute( skinWeight, i );

            // the following code section is normally implemented in the vertex shader

            vertex.applyMatrix4( bindMatrix ); // transform to bind space
            skinned.set( 0, 0, 0 );

            for ( j = 0; j < 4; j ++ ) {

                si = skinIndices.getComponent( j );
                sw = skinWeights.getComponent( j );
                boneMatrix.fromArray( boneMatrices, si * 16 );

                // weighted vertex transformation
                temp.copy( vertex ).applyMatrix4( boneMatrix ).multiplyScalar( sw );
                skinned.add( temp );
            }

            skinned.applyMatrix4( bindMatrixInverse ); // back to local space

            // expand aabb
            aabb.expandByPoint( skinned );
        }
    }

    aabb.applyMatrix4( skinnedMesh.matrixWorld );
}
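A minimal usage sketch (skinnedMesh is assumed to be a THREE.SkinnedMesh already added to the scene):
var aabb = new THREE.Box3();

function animate() {
    requestAnimationFrame( animate );
    updateAABB( skinnedMesh, aabb ); // world-space box for the current pose
    renderer.render( scene, camera );
}

animate();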
Also, for the morph targets, someone pointed out that this code is already present in the Mesh.raycast function
Yes, you can raycast against morphed meshes. Raycasting against skinned meshes is not supported yet. The code in Mesh.raycast() is already very complex; I think it needs some serious refactoring before it is enhanced further. In the meantime, you can use the code snippet presented above to build a solution yourself. The vertex displacement logic is the most complicated part.
Live demo: https://jsfiddle.net/fnjkeg9x/1/
three.js R107
I'm importing a mesh and have already computed its vertex normals. I want to use my normals instead of calling computeVertexNormals() on the geometry object. Right now I have
var geometry = new THREE.Geometry();
// fill in vertices, faces and texture coordinates
// ...
// compute normals
geometry.computeFaceNormals();
geometry.computeVertexNormals(); // <-- Would like to replace this
There is a reference in the docs to using a buffer attribute, but no examples:
http://threejs.org/docs/index.html?q=vertex#Reference/Core/BufferAttribute
Does anyone know how to do this?
thanks,
john
In the case of a THREE.Geometry object, the vertex normals are stored with the faces. If NormalList contains the normal vectors for the vertices, then this loop will do it.
/*
 * Add the normals. They are stored per face corner in the face array,
 * indexed by the face's vertex indices (a, b, c).
 */
for ( var f = 0, fl = geometry.faces.length; f < fl; f ++ ) {

    var face = geometry.faces[ f ];

    face.vertexNormals[ 0 ] = new THREE.Vector3( NormalList[ face.a ].x, NormalList[ face.a ].y, NormalList[ face.a ].z );
    face.vertexNormals[ 1 ] = new THREE.Vector3( NormalList[ face.b ].x, NormalList[ face.b ].y, NormalList[ face.b ].z );
    face.vertexNormals[ 2 ] = new THREE.Vector3( NormalList[ face.c ].x, NormalList[ face.c ].y, NormalList[ face.c ].z );
}
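If you are working with THREE.BufferGeometry instead, here is a hedged sketch of the buffer-attribute route the docs hint at, assuming NormalList is ordered to match the position attribute (addAttribute is the method name in this era of the API):
// pack the per-vertex normals into a flat Float32Array (x, y, z per vertex)
var normals = new Float32Array( NormalList.length * 3 );

for ( var i = 0; i < NormalList.length; i ++ ) {
    normals[ i * 3 ]     = NormalList[ i ].x;
    normals[ i * 3 + 1 ] = NormalList[ i ].y;
    normals[ i * 3 + 2 ] = NormalList[ i ].z;
}

bufferGeometry.addAttribute( 'normal', new THREE.BufferAttribute( normals, 3 ) );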
I am splitting a 1024 x 1024 texture over 32 tiles of 32x32. I'm not sure if it's possible to share the texture with an offset, or whether I would need to create a new texture for each tile with the offset.
To create the offset I am using a uniform value = 32 * i, updating the uniform on each loop iteration that creates a tile, but all the tiles seem to get the same offset. Basically I want the image to appear as one image, not broken up into little tiles, but the current output shows the same x,y offset on all 32 tiles. I'm using the vertex shader with three.js r71...
Would I need to create a new texture for each tile with the offset?
for ( j = 0; j < row; j ++ ) {
    for ( t = 0; t < col; t ++ ) {
        // note: every mesh shares the same material instance, so they all
        // share the same uniforms; the last values written apply to all tiles
        customUniforms.tX.value = tX;
        customUniforms.tY.value = tY;
        console.log( customUniforms.tX.value );
        customUniforms.tX.needsUpdate = true;
        customUniforms.tY.needsUpdate = true;
        mesh = new THREE.Mesh( geometry, mMaterial ); // or new material
    }
}
// vertex shader:
vec2 uvOffset = vUV + vec2( tX, tY );
Image example: each tile should have an offset of 10 or 20 px, but they are all the same... this is from using one texture.
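One thing I have not tried yet: giving each tile its own material, so the uniforms are not shared (ShaderMaterial.clone() gives each clone its own copy of the uniforms in r71). A sketch, with illustrative offsets rather than my exact values:
for ( var j = 0; j < row; j ++ ) {
    for ( var t = 0; t < col; t ++ ) {
        var tileMaterial = mMaterial.clone(); // per-tile uniforms
        tileMaterial.uniforms.tX.value = t / col; // illustrative per-tile offset
        tileMaterial.uniforms.tY.value = j / row;
        scene.add( new THREE.Mesh( geometry, tileMaterial ) );
    }
}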
As suggested, I have tried to manipulate the UVs on each object, without luck. It seems to give the same position to corresponding vertices everywhere; for example, on a 10x10-segment plane all the faces come out the same.
var geometry = [
    [ new THREE.PlaneGeometry( w, w, 64, 64 ), 50 ],
    [ new THREE.PlaneGeometry( w, w, 40, 40 ), 500 ],
    [ new THREE.PlaneGeometry( w, w, 30, 30 ), 850 ],
    [ new THREE.PlaneGeometry( w, w, 16, 16 ), 1200 ]
];

geometry[ 0 ][ 0 ].faceVertexUvs[ 0 ] = [];

for ( var p = 0; p < geometry[ 0 ][ 0 ].faces.length; p ++ ) {
    geometry[ 0 ][ 0 ].faceVertexUvs[ 0 ].push( [
        new THREE.Vector2( 0.0, 0.0 ),
        new THREE.Vector2( 0.0, 1.0 ),
        new THREE.Vector2( 1.0, 1.0 ),
        new THREE.Vector2( 1.0, 0.0 )
    ] );
}
Image of this result: you will notice all the vertices get the same UVs when they shouldn't.
Update again:
I have to go through the vertices of each face individually, since two triangles make a quad, to avoid the above issue. I think I may have this solved... will update.
Last update, hopefully:
Below is the source code, but I am lost on making the algorithm display the texture as expected.
/*
 * j and t are the row and column indices, looping over a 4x4 grid
 * row = 4, col = 4
 */
for ( i = 0; i < geometry.length; i ++ ) {

    var mesh = new THREE.Mesh( geometry[ i ][ 0 ], customMaterial );
    mesh.geometry.computeBoundingBox();

    var max = mesh.geometry.boundingBox.max;
    var min = mesh.geometry.boundingBox.min;

    var offset = new THREE.Vector2( 0 - min.x * t * j + w, 0 - min.y * j + w ); // here is my issue
    var range = new THREE.Vector2( max.x - min.x * row * 2, max.y - min.y * col * 2 );

    mesh.geometry.faceVertexUvs[ 0 ] = [];
    var faces = mesh.geometry.faces;

    for ( p = 0; p < faces.length; p ++ ) {

        var v1 = mesh.geometry.vertices[ faces[ p ].a ];
        var v2 = mesh.geometry.vertices[ faces[ p ].b ];
        var v3 = mesh.geometry.vertices[ faces[ p ].c ];

        mesh.geometry.faceVertexUvs[ 0 ].push( [
            new THREE.Vector2( ( v1.x + offset.x ) / range.x, ( v1.y + offset.y ) / range.y ),
            new THREE.Vector2( ( v2.x + offset.x ) / range.x, ( v2.y + offset.y ) / range.y ),
            new THREE.Vector2( ( v3.x + offset.x ) / range.x, ( v3.y + offset.y ) / range.y )
        ] );
    }
}
You will notice in the image below that the tile in red is seamless, while the other tiles are not aligned with the texture.
Here is the answer:
var offset = new THREE.Vector2( w - min.x - w + ( w * t ), w - min.y + w + ( w * -j + w ) );
var range = new THREE.Vector2( max.x - min.x * 7, max.y - min.y * 7 );
If you can simplify this answer, I will award a bounty too.
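For reference, the usual formula for mapping one image continuously across a grid of tiles, as a hedged sketch in the same terms (t and j are this tile's column and row, col and row the grid size, min/max the tile's bounding box):
// localU/localV run 0..1 within this tile; adding the tile index and dividing
// by the grid size maps each tile onto its own slice of the full image
var localU = ( v1.x - min.x ) / ( max.x - min.x );
var localV = ( v1.y - min.y ) / ( max.y - min.y );
var u = ( t + localU ) / col;
var v = ( j + localV ) / row;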
I can't find an explanation anywhere of how to use the frames option for ExtrudeGeometry in Three.js. Its documentation says:
extrudePath — THREE.CurvePath. 3d spline path to extrude shape along. (creates Frames if frames aren't defined)
frames — THREE.TubeGeometry.FrenetFrames. containing arrays of tangents, normals, binormals
but I don't understand how frames must be defined. I think that with the "frames" option you pass three arrays for tangents, normals and binormals (calculated in some way), but how do you pass them in frames? Probably (like here for morphNormals):
frames = { tangents: [ new THREE.Vector3(), ... ], normals: [ new THREE.Vector3(), ... ], binormals: [ new THREE.Vector3(), ... ] };
with the three arrays of the same length (perhaps corresponding to the steps or curveSegments option in ExtrudeGeometry)?
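To make the question concrete, here is a sketch of what I imagine, reusing the object returned by THREE.TubeGeometry.FrenetFrames (shape, steps and spline are assumed defined; I am not sure this is right, which is what I am asking):
var path = new THREE.SplineCurve3( spline ); // spline: array of THREE.Vector3
var frames = new THREE.TubeGeometry.FrenetFrames( path, steps, false );
// frames.tangents, frames.normals and frames.binormals are parallel arrays of THREE.Vector3

var geometry = new THREE.ExtrudeGeometry( shape, {
    steps: steps,
    extrudePath: path,
    frames: frames // pass the whole FrenetFrames-shaped object
} );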
Many thanks for an explanation.
Edit 1:
String.prototype.format = function () {
    var str = this;
    for ( var i = 0; i < arguments.length; i ++ ) {
        str = str.replace( '{' + i + '}', arguments[ i ] );
    }
    return str;
};
var numSegments = 6;
var frames = new THREE.TubeGeometry.FrenetFrames( new THREE.SplineCurve3( spline ), numSegments );

var tangents = frames.tangents,
    normals = frames.normals,
    binormals = frames.binormals;

var tangents_list = [],
    normals_list = [],
    binormals_list = [];

for ( var i = 0; i < numSegments; i ++ ) {

    var tangent = tangents[ i ];
    var normal = normals[ i ];
    var binormal = binormals[ i ];

    tangents_list.push( "({0}, {1}, {2})".format( tangent.x, tangent.y, tangent.z ) );
    normals_list.push( "({0}, {1}, {2})".format( normal.x, normal.y, normal.z ) );
    binormals_list.push( "({0}, {1}, {2})".format( binormal.x, binormal.y, binormal.z ) );
}

alert( tangents_list );
alert( normals_list );
alert( binormals_list );
Edit 2
Some time ago I opened this topic, for which I used this solution:
var spline = new THREE.SplineCurve3([
new THREE.Vector3(20.343, 19.827, 90.612), // t=0
new THREE.Vector3(22.768, 22.735, 90.716), // t=1/12
new THREE.Vector3(26.472, 23.183, 91.087), // t=2/12
new THREE.Vector3(27.770, 26.724, 91.458), // t=3/12
new THREE.Vector3(31.224, 26.976, 89.861), // t=4/12
new THREE.Vector3(32.317, 30.565, 89.396), // t=5/12
new THREE.Vector3(31.066, 33.784, 90.949), // t=6/12
new THREE.Vector3(30.787, 36.310, 88.136), // t=7/12
new THREE.Vector3(29.354, 39.154, 90.152), // t=8/12
new THREE.Vector3(28.414, 40.213, 93.636), // t=9/12
new THREE.Vector3(26.569, 43.190, 95.082), // t=10/12
new THREE.Vector3(24.237, 44.399, 97.808), // t=11/12
new THREE.Vector3(21.332, 42.137, 96.826) // t=12/12=1
]);
var spline_1 = [], spline_2 = [], t;

for ( t = 0; t <= ( 7 / 12 ); t += 0.0001 ) {
    spline_1.push( spline.getPoint( t ) );
}
for ( t = ( 7 / 12 ); t <= 1; t += 0.0001 ) {
    spline_2.push( spline.getPoint( t ) );
}
But I was wondering about the possibility of setting the tangent, normal and binormal of the first point (t=0) of spline_2 to be the same as those of the last point (t=1) of spline_1, so I thought the frames option might be useful for the purpose. Would it be possible to overwrite the values of a tangent, normal and binormal in their respective lists, so that the last point (t=1) of spline_1 and the first point (t=0) of spline_2 get the same values and so guide the extrusion? For example, for the tangent at t=0 of spline_2:
tangents[0].x = 0.301;
tangents[0].y = 0.543;
tangents[0].z = 0.138;
doing the same also for normals[0] and binormals[0], to ensure the same orientation for the last point (t=1) of spline_1 and the first point (t=0) of spline_2.
Edit 3
I'm trying to visualize the tangent, normal and binormal for each control point of "mypath" (the spline) using ArrowHelper, but, as you can see in the demo, the ArrowHelpers do not start at the origin but away from it. (On scene loading you need to zoom out slowly until the ArrowHelpers become visible; the relevant code runs from line 122 to line 152 in the fiddle.) How can I obtain the same result as this reference demo (when you check the "Debug normals" checkbox)?
Edit 4
I plotted two splines that respectively end (blue spline) and start (red spline) at point A (= origin), displaying tangent, normal and binormal vectors at point A for each spline (using cyan color for the blue spline's labels, and yellow color for the red spline's labels).
As mentioned above, to align the two splines and make them continuous, I thought to exploit the three vectors (tangent, normal and binormal). Which mathematical operation, in theory, should I use to turn the end face of the blue spline so that it faces the initial face (yellow face) of the red spline, with the respective tangents (D, D', the latter hidden in the picture), normals (B, B') and binormals (C, C') aligned? Should I use the .setFromUnitVectors( vFrom, vTo ) method of the quaternion? Its documentation reads: "Sets this quaternion to the rotation required to rotate direction vector vFrom to direction vector vTo. vFrom and vTo are assumed to be normalized." So, probably, I need to define three quaternions:
quaternion for the rotation of the normalized tangent D vector in the direction of the normalized tangent D' vector
quaternion for the rotation of the normalized normal B vector in the direction of the normalized normal B' vector
quaternion for the rotation of the normalized binormal C vector in the direction of the normalized binormal C' vector
with:
vFrom = the normalized D, B and C vectors
vTo = the normalized D', B' and C' vectors
and apply each of the three quaternions respectively to D, B and C (not normalized)?
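For comparison, here is a sketch of an alternative I am considering: aligning the whole frame in one step with basis matrices instead of three separate quaternions (D, B, C and D2, B2, C2 stand in for D', B', C'; all are assumed to be orthonormal THREE.Vector3s):
// build a rotation matrix from each frame's (tangent, normal, binormal) triple
var basis1 = new THREE.Matrix4().makeBasis( D, B, C );    // end frame of spline_1
var basis2 = new THREE.Matrix4().makeBasis( D2, B2, C2 ); // start frame of spline_2

// the rotation taking frame 1 onto frame 2 is basis2 * inverse( basis1 )
var align = new THREE.Matrix4().multiplyMatrices( basis2, new THREE.Matrix4().getInverse( basis1 ) );
var q = new THREE.Quaternion().setFromRotationMatrix( align );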
Thanks a lot again
Edit 5
I tried this code (looking at the image to work out how to align the vectors), but nothing changed:
var numSegments_1 = points_1.length; // points_1 = list of points
var frames_1 = new THREE.TubeGeometry.FrenetFrames( points_1_spline, numSegments_1, false ); // path, segments, closed

var tangents_1 = frames_1.tangents,
    normals_1 = frames_1.normals,
    binormals_1 = frames_1.binormals;

var numSegments_2 = points_2.length;
var frames_2 = new THREE.TubeGeometry.FrenetFrames( points_2_spline, numSegments_2, false );

var tangents_2 = frames_2.tangents,
    normals_2 = frames_2.normals,
    binormals_2 = frames_2.binormals;

// angle between binormal_1 (at point A of spline 1) and binormal_2 (at point A of spline 2)
var b1_b2_angle = binormals_1[ binormals_1.length - 1 ].angleTo( binormals_2[ 0 ] );

var quaternion_n1_axis = new THREE.Quaternion();
quaternion_n1_axis.setFromAxisAngle( normals_1[ normals_1.length - 1 ], b1_b2_angle ); // rotation about normal_1 as axis

var vector_b1 = binormals_1[ binormals_1.length - 1 ];
vector_b1.applyQuaternion( quaternion_n1_axis ); // apply quaternion to binormal_1 (mutates the stored frame vector)

// angle between normal_1 (at point A of spline 1) and normal_2 (at point A of spline 2)
var n1_n2_angle = normals_1[ normals_1.length - 1 ].angleTo( normals_2[ 0 ] );

var quaternion_b1_axis = new THREE.Quaternion();
quaternion_b1_axis.setFromAxisAngle( binormals_1[ binormals_1.length - 1 ], -n1_n2_angle ); // rotation about binormal_1 as axis

var vector_n1 = normals_1[ normals_1.length - 1 ];
vector_n1.applyQuaternion( quaternion_b1_axis ); // apply quaternion to normal_1 (mutates the stored frame vector)
and nothing changed with this other approach either:
var numSegments_1 = points_1.length; // points_1 = list of points
var frames_1 = new THREE.TubeGeometry.FrenetFrames( points_1_spline, numSegments_1, false ); // path, segments, closed

var tangents_1 = frames_1.tangents,
    normals_1 = frames_1.normals,
    binormals_1 = frames_1.binormals;

var numSegments_2 = points_2.length;
var frames_2 = new THREE.TubeGeometry.FrenetFrames( points_2_spline, numSegments_2, false );

var tangents_2 = frames_2.tangents,
    normals_2 = frames_2.normals,
    binormals_2 = frames_2.binormals;

var quaternion_n1_axis = new THREE.Quaternion();
// note: normalize() mutates the stored frame vectors in place
quaternion_n1_axis.setFromUnitVectors( binormals_1[ binormals_1.length - 1 ].normalize(), binormals_2[ 0 ].normalize() );

var vector_b1 = binormals_1[ binormals_1.length - 1 ];
vector_b1.applyQuaternion( quaternion_n1_axis );

var quaternion_b1_axis = new THREE.Quaternion();
quaternion_b1_axis.setFromUnitVectors( normals_1[ normals_1.length - 1 ].normalize(), normals_2[ 0 ].normalize() );

var vector_n1 = normals_1[ normals_1.length - 1 ];
vector_n1.applyQuaternion( quaternion_b1_axis );
So I have a heightmap system which works well enough; however, since three.js r60 removed the Face4 object, I am having issues.
My code is something like this:
this.buildGeometry = function () {

    var geo, len, i, f, y;

    geo = new THREE.PlaneGeometry( 3000, 3000, 128, 128 );
    geo.dynamic = true;
    geo.applyMatrix( new THREE.Matrix4().makeRotationX( -Math.PI / 2 ) );

    this.getHeightData( 'heightmap.png', function ( data ) {

        len = geo.faces.length;

        for ( i = 0; i < len; i ++ ) {
            f = geo.faces[ i ];
            if ( f ) {
                y = ( data[ i ].r + data[ i ].g + data[ i ].b ) / 2;
                geo.vertices[ f.a ].y = y;
                geo.vertices[ f.b ].y = y;
                geo.vertices[ f.c ].y = y;
                geo.vertices[ f.d ].y = y; // breaks in r60+: faces are Face3, so there is no f.d
            }
        }

        geo.computeFaceNormals();
        geo.computeCentroids();

        mesh = new THREE.Mesh( geo, new THREE.MeshBasicMaterial( { color: 0xff0000 } ) );
        scene.add( mesh );
    });
};
This worked well since one pixel represented each face. How is this done now that the faces are all triangulated?
Similarly, I use image maps for model positioning: each pixel matched its respective Face4, and a desired mesh was placed at that face's centroid. How can this be accomplished now?
I really miss being able to update the library and do not want to be stuck in r59 anymore =[
This approach works fine in recent versions (tested on r66).
Notice that genFn returns the height y given the current col and row and the max col and row (it's for testing purposes; you can of course replace it with a proper array lookup or a read from a grayscale image). The 64x64 determines the mesh resolution and the 1x1 the real-world dimensions.
var genFn = function ( x, y, X, Y ) {
    var dx = x / X;
    var dy = y / Y;
    return ( Math.sin( dx * 15 ) + Math.cos( dy * 5 ) ) * 0.05 + 0.025;
};

var geo = new THREE.PlaneGeometry( 1, 1, 64, 64 );
geo.applyMatrix( new THREE.Matrix4().makeRotationX( -Math.PI / 2 ) );

// address the vertex grid directly instead of going through faces
var iz, ix,
    gridZ1 = geo.widthSegments + 1,
    gridX1 = geo.heightSegments + 1;

for ( iz = 0; iz < gridZ1; ++ iz ) {
    for ( ix = 0; ix < gridX1; ++ ix ) {
        geo.vertices[ ix + gridX1 * iz ].y = genFn( ix, iz, gridX1, gridZ1 );
    }
}

geo.computeFaceNormals();
geo.computeVertexNormals();
geo.computeCentroids();

var mesh = new THREE.Mesh( geo, mtl ); // mtl: any material instance, assumed defined
scene.add( mesh );
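As for the centroid part of the question, a hedged sketch (not part of the original answer): a Face3 centroid is just the average of its three vertices, and in PlaneGeometry the two triangles that replaced each quad appear to be stored consecutively, so the old quad centroid can be recovered by averaging face pairs:
// centroid of a single Face3
function faceCentroid( geo, face ) {
    return new THREE.Vector3()
        .add( geo.vertices[ face.a ] )
        .add( geo.vertices[ face.b ] )
        .add( geo.vertices[ face.c ] )
        .divideScalar( 3 );
}

// centroid of old quad i, assuming faces 2i and 2i+1 are its two triangles
var quadCentroid = faceCentroid( geo, geo.faces[ 2 * i ] )
    .add( faceCentroid( geo, geo.faces[ 2 * i + 1 ] ) )
    .multiplyScalar( 0.5 );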