Changing material color on a merged mesh with three.js

Is it possible to interact with the buffer used when merging multiple meshes, in order to change the color of a selected individual mesh?
It's easy to do with a collection of meshes, but what about a merged mesh with multiple different materials?

#hgates, your last comment was very helpful to me, I was looking for the same thing for days!
OK, I set a color on each face and enabled vertex colors on the material, and that solved the problem! :)
I'll write out the whole approach here as a proper answer for those who are in the same situation:
// Define a main geometry used for the final merged mesh
var mainGeometry = new THREE.Geometry();
// Create one geometry, one material and one mesh shared by all the shapes you want to merge together (here I did 1000 cubes)
var cubeGeometry = new THREE.CubeGeometry( 1, 1, 1 );
// THREE.FaceColors makes the material read the per-face colors assigned below
var cubeMaterial = new THREE.MeshBasicMaterial({vertexColors: THREE.FaceColors});
var cubeMesh = new THREE.Mesh( cubeGeometry );
for ( var i = 0; i < 1000; i++ ) {
// I set the color on the material for each of my cubes individually, which is just random here
cubeMaterial.color.setHex( Math.random() * 0xffffff );
// For each face of the cube, I assign a copy of that color (a copy, so later cubes don't overwrite it)
for ( var j = 0; j < cubeGeometry.faces.length; j++ ) {
cubeGeometry.faces[ j ].color.copy( cubeMaterial.color );
}
// Each cube is merged into the mainGeometry
THREE.GeometryUtils.merge( mainGeometry, cubeMesh );
}
// Then I create my final mesh, composed of the mainGeometry and the cubeMaterial
var finalMesh = new THREE.Mesh( mainGeometry, cubeMaterial );
scene.add( finalMesh );
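A note for the original question: to change the color of one cube after merging, rewrite the face colors of just that cube and flag them for re-upload. A minimal sketch, assuming a Face3-based CubeGeometry (12 triangles per cube) and that the cubes were merged in order; the helper name is mine:
// Recolor cube number cubeIndex inside the merged mesh
var FACES_PER_CUBE = 12; // 6 sides x 2 triangles each
function setCubeColor( cubeIndex, hex ) {
for ( var f = 0; f < FACES_PER_CUBE; f++ ) {
finalMesh.geometry.faces[ cubeIndex * FACES_PER_CUBE + f ].color.setHex( hex );
}
// Tell three.js to push the updated face colors to the GPU
finalMesh.geometry.colorsNeedUpdate = true;
}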
Hope it helps as it helped me! :)

Depends on what you mean by "changing colors". Note that after merging, the mesh behaves like any other non-merged mesh.
If you mean vertex colors, it would be possible to iterate over the faces and determine which vertices' colors to change based on the material index.
If you mean setting a color on the material itself, sure, that's possible. Merged meshes can still have multiple materials the same way ordinary meshes do - via MeshFaceMaterial - though if you are merging yourself, you need to pass in a material index offset parameter for each geometry.
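For reference, a minimal sketch of that offset parameter with the legacy merge helper (names other than the merge call itself are illustrative):
// Reserve a slot in the shared materials array for this part's material
var materialIndexOffset = materials.length;
materials.push( partMaterial );
// The offset is added to each merged face's materialIndex so it points at the new slot
THREE.GeometryUtils.merge( mainGeometry, partGeometry, materialIndexOffset );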

// Add a new material for this sub-geometry
this.meshMaterials.push(new THREE.MeshBasicMaterial(
{color:0x00ff00 * Math.random(), side:THREE.DoubleSide}));
// Point every face of this geometry at the material just added
for ( var i = 0; i < geometry.faces.length; i++ ) {
geometry.faces[i].materialIndex = this.meshMaterials.length-1;
}
var mesh = new THREE.Mesh(geometry);
THREE.GeometryUtils.merge(this.globalMesh, mesh);
// Build one mesh from the merged geometry and all the collected materials
var finalMesh = new THREE.Mesh(this.globalMesh, new THREE.MeshFaceMaterial(this.meshMaterials));
Works like a charm, for those who need an example, but! This creates multiple additional buffers (indices and vertex data), and multiple drawElements calls too :( I inspected the draw calls with WebGL Inspector. Before adding the MeshFaceMaterial: 75 GL API calls, running at 60 fps easily. After: 3490 GL API calls, and the frame rate drops about 20%, to 45-50 fps. This means that drawElements is called for every mesh, so we lose the benefit of merging the meshes. Did I miss something here? I want to share different materials on the same buffer.

Related

Three.js raycast between two objects in scene

I know how to raycast an object in the scene when clicking the mouse, but now I need to know whether two objects in the scene can raycast each other.
That is, I load a 3D object into the scene, for example two rooms as an OBJ object, then I add three box meshes at certain points: for example, two points in the first room and one point in the second room.
The two points in the first room can raycast each other (they have direct vision), but the two points in the first room can't raycast the point in the second room (they don't have vision through the room wall).
I attached the code used to load the scene and the points; any suggestion how to do this?
//LOAD MAIN 3D OBJECT
var objLoader = new THREE.OBJLoader();
objLoader.setMaterials(materials);
objLoader.setPath('./asset/3d/');
objLoader.load("model01.obj", function(object){
var mesh = object.children[0];
mesh.castShadow = true;
mesh.receiveShadow = true;
mesh.rotation.x = Math.PI / 2;
var box = new THREE.Box3().setFromObject( object );
var ox = -(box.max.x + box.min.x) / 2;
var oy = -(box.max.y + box.min.y) / 2;
var oz = -(box.max.z + box.min.z) / 2;
mesh.position.set(ox, oy, oz);
_scene.add(mesh);
render();
setTimeout(render, 1000);
});
//LOAD count_points inside scene
for(var i=0;i<cta_points;i++){
var c_r = 2;
var c_geometry = new THREE.BoxBufferGeometry( c_r, c_r, c_r );
var c_material = new THREE.MeshLambertMaterial( { color: new THREE.Color("rgb("+(40 + 30)+", 0, 0)"),opacity: 0.0,
transparent: true} );
var c_mesh = new THREE.Mesh( c_geometry, c_material );
var position = get_positions(i);
c_mesh.position.copy(position);
c_mesh.name="BOX";
scene.add( c_mesh );
}
Possibly take a look at:
How to detect collision in three.js?
Usually, to solve this problem, you would use a collision mask together with collision groups.
A collision group is assigned per object and is represented by a bit in a bitmask.
The wall could be in one collision group, like 4 (binary 100),
and the objects could be in another group, say 2 (binary 10).
Then you just need to check each object against the mask: test whether the object's collision group bit is set in the bitmask (with the groups above, a mask could be 10, 100, or their combination).
That way, you can call THREE.Raycaster().intersectObjects(args), where the arguments are the objects that pass the bitmask test ( mask & object.collision_group ).
So you won't need to include the wall in a given raycast, since it sits behind a separate bit in the mask.
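A minimal sketch of that filtering, assuming we tag each mesh ourselves via userData (three.js has no built-in collision groups; the group values and the helper name are illustrative):
var GROUP_POINTS = 2; // binary 010
var GROUP_WALLS = 4; // binary 100
wallMesh.userData.collisionGroup = GROUP_WALLS;
c_mesh.userData.collisionGroup = GROUP_POINTS;
// Raycast from origin towards target, but only against objects whose group bit is set in mask
function raycastWithMask( origin, target, objects, mask ) {
var direction = target.clone().sub( origin ).normalize();
var raycaster = new THREE.Raycaster( origin, direction );
var candidates = objects.filter( function ( obj ) {
return ( obj.userData.collisionGroup & mask ) !== 0;
} );
return raycaster.intersectObjects( candidates );
}
// Line-of-sight test: include the walls in the mask; if the nearest hit is closer than the target point, the wall blocks vision
var hits = raycastWithMask( pointA.position, pointB.position, scene.children, GROUP_POINTS | GROUP_WALLS );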

Three.js performance optimization with 10000 meshes

I load an .obj model in Three.js and then create independent meshes from its faces for a really interesting animation. But the problem is very bad performance with so many meshes.
In fact, a single mesh with 10000 faces works beautifully. But 10000 separate meshes (created from those faces) perform badly - even without animation, just a static scene.
How can I optimize performance while keeping this animation?
Link: http://intelligence-group.ru/test.html
Here is the code creating meshes:
obj_loader.load(
'/assets/models/zeus.obj',
function(object) {
var material = new THREE.MeshPhongMaterial( {
color: "#eeeeee",
shading: THREE.FlatShading,
// note: metalness and roughness are MeshStandardMaterial parameters and are ignored by MeshPhongMaterial
metalness: 0,
roughness: 0.5,
refractionRatio: 0.25
} );
for (var i = 0; i < object.children.length; i++) {
var child = object.children[i];
var geometry = new THREE.Geometry().fromBufferGeometry(child.geometry);
// use a different index variable here so it does not shadow the outer i
for (var j = 0; j < geometry.faces.length; j++) {
var new_geometry = new THREE.Geometry();
var a = geometry.faces[j].a;
var b = geometry.faces[j].b;
var c = geometry.faces[j].c;
new_geometry.vertices.push(geometry.vertices[a]);
new_geometry.vertices.push(geometry.vertices[b]);
new_geometry.vertices.push(geometry.vertices[c]);
// each geometry gets its own Face3, so computeFaceNormals() does not overwrite a shared normal
new_geometry.faces.push( new THREE.Face3( 0, 1, 2 ) );
new_geometry.computeFaceNormals();
var mesh = new THREE.Mesh( new_geometry, material );
group.add( mesh );
}
full_orig_array(group); // animation function - not the reason for the bad performance!
}
scene.add(group);
}
);
Important: after the animation completes, I substitute the 10000 meshes with one single mesh (the original object from the loader) - and then you can see a big improvement in performance. It's not about the animation - I checked: even without animation, 10000 meshes give the same bad performance.
As I understand it, the problem is the separate geometry in each mesh. But I don't know how to solve this problem.
Please take into account that I don't duplicate geometry - each mesh's geometry is unique. That is the problem!
There are already a number of answers here on Stack Overflow about the performance cost of draw calls and state changes, so I won't go into that. You NEED to get the number of draw calls down to render efficiently. How to do that is completely up to your exact problem and your creativity.
My suggestion would be to use a single BufferGeometry: you could animate all vertex positions within a single buffer geometry. You would need to keep the state (translation, rotation, etc.) outside of the geometry, but you can write code that freely transforms all of your triangles as if they were single objects.
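A minimal sketch of that idea, assuming a non-indexed BufferGeometry with three vertices per triangle and a triangleVelocities array (one THREE.Vector3 per triangle) holding the per-triangle state outside the geometry; the names are illustrative:
var positionAttr = mergedGeometry.getAttribute( 'position' );
function animateTriangles( delta ) {
for ( var t = 0; t < triangleVelocities.length; t++ ) {
var v = triangleVelocities[ t ];
for ( var k = 0; k < 3; k++ ) {
var i = t * 3 + k; // vertex index of this triangle's k-th corner
positionAttr.setXYZ( i,
positionAttr.getX( i ) + v.x * delta,
positionAttr.getY( i ) + v.y * delta,
positionAttr.getZ( i ) + v.z * delta );
}
}
positionAttr.needsUpdate = true; // re-upload the buffer to the GPU once per frame
}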
You get overhead from many draw calls and WebGL state changes. Rendering as one mesh is a single draw call vs. 10000.
You can use three's InstancedBufferGeometry to merge these into one call, without duplicating the geometry (thus saving both memory and overhead).
This class unfortunately does not work with default materials, shadows, etc. It's a fairly low-level construct.
I wrote a further abstraction of this that should work on the same level as THREE.Mesh and work with shadows, AO, depth, etc.:
https://www.npmjs.com/package/three-instanced-mesh
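For reference, a minimal sketch of raw InstancedBufferGeometry usage, assuming a custom ShaderMaterial whose vertex shader reads the per-instance offset attribute (the default materials will not, as noted above; all names are illustrative):
var instanced = new THREE.InstancedBufferGeometry();
// share the base geometry's buffers across all instances
instanced.index = baseGeometry.index;
instanced.attributes.position = baseGeometry.attributes.position;
// one vec3 translation per instance, advanced once per instance by the GPU
var offsets = new Float32Array( INSTANCE_COUNT * 3 );
// ... fill offsets with per-instance positions ...
instanced.addAttribute( 'offset', new THREE.InstancedBufferAttribute( offsets, 3 ) );
scene.add( new THREE.Mesh( instanced, customShaderMaterial ) );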

Applying a matrix in Three.js does not do what I expect

For a project I am working on, I am trying to get a 3D model of a building visible in a browser. For all of the elements of the building I have vertices, indices and a transformation matrix. This information comes from an application that uses OpenGL to show the elements in an offline program.
Now I am trying to add these elements to my Three.js scene.
I am at the point where I can add elements to the scene, defined by the vertices and indices, and I can see them by using materials and lights, but I cannot rotate and translate them into the right place. For example, I add an element like this:
var m242242255255 = new THREE.MeshPhongMaterial({color:0xf2f2ff, transparent:true, opacity:1, side:THREE.DoubleSide});
var geometry = new THREE.BufferGeometry();
geometry.addAttribute('position', new THREE.BufferAttribute(new Float32Array([821,-15,2825.1,-821,-15,2825.1,-821,-39,2825.1,821,-39,2825.1,-821,-39,54,-821,-15,54,821,-15,54,821,-39,54,-875,-54,0,-821,-54,54,-821,-54,2825.1,821,-54,54,-875,-54,2879.1,821,-54,2825.1,875,-54,0,875,-54,2879.1,875,0,0,821,0,54,821,0,2825.1,-821,0,54,875,0,2879.1,-821,0,2825.1,-875,0,0,-875,0,2879.1]), 3));
geometry.setIndex(new THREE.BufferAttribute(new Uint16Array([8,9,10,9,8,11,12,10,13,10,12,8,11,14,13,14,11,8,13,15,12,15,13,14,16,17,18,17,16,19,20,18,21,18,20,16,19,22,21,22,19,16,21,23,20,23,21,22,8,22,16,16,14,8,14,16,20,20,15,14,15,20,23,23,12,15,12,23,22,22,8,12,13,18,17,17,11,13,11,17,19,19,9,11,9,19,21,21,10,9,10,21,18,18,13,10]), 1));
var mesh = new THREE.Mesh(geometry, m242242255255);
mesh.matrixAutoUpdate = false;
mesh.applyMatrix(new THREE.Matrix4().set(0,0,-1,0, -0.42262,-0.90631,0,0, -0.90631,0.42262,0,0, 64754.68,15569.13,-4647.5,1));
mesh.updateMatrix();
scene.add(mesh);
The element shows up in my scene and it looks like it is rotated, but it is not translated to its correct position.
I can add the translation before adding the mesh to the scene, but it feels like that should not be necessary.
mesh.applyMatrix(new THREE.Matrix4().makeTranslation(-64754.68, -15569.13, -4647.5));
mesh.updateMatrix();
It also looks like the element is rotated around the wrong axis: around the x-axis instead of the z-axis. Can someone tell me what I am doing wrong? Should I change the matrix first to be able to use it in Three.js?
Edit:
I just found out that I had to invert my matrix to correct the rotation problem. So I now have:
var geometry = new THREE.BufferGeometry();
geometry.addAttribute('position', new THREE.BufferAttribute(new Float32Array([821,-15,2825.1,-821,-15,2825.1,-821,-39,2825.1,821,-39,2825.1,-821,-39,54,-821,-15,54,821,-15,54,821,-39,54,-875,-54,0,-821,-54,54,-821,-54,2825.1,821,-54,54,-875,-54,2879.1,821,-54,2825.1,875,-54,0,875,-54,2879.1,875,0,0,821,0,54,821,0,2825.1,-821,0,54,875,0,2879.1,-821,0,2825.1,-875,0,0,-875,0,2879.1]), 3));
geometry.setIndex(new THREE.BufferAttribute(new Uint16Array([8,9,10,9,8,11,12,10,13,10,12,8,11,14,13,14,11,8,13,15,12,15,13,14,16,17,18,17,16,19,20,18,21,18,20,16,19,22,21,22,19,16,21,23,20,23,21,22,8,22,16,16,14,8,14,16,20,20,15,14,15,20,23,23,12,15,12,23,22,22,8,12,13,18,17,17,11,13,11,17,19,19,9,11,9,19,21,21,10,9,10,21,18,18,13,10]), 1));
var mesh = new THREE.Mesh(geometry, m242242255255);
mesh.matrixAutoUpdate = false;
var matrix = new THREE.Matrix4();
matrix.set(0,0,-1,0,-0.42262,-0.90631,0,0,-0.90631,0.42262,0,0,64754.68,15569.13,-4647.5,1);
matrix.getInverse(matrix);
mesh.applyMatrix( matrix );
mesh.updateMatrix();
mesh.applyMatrix( new THREE.Matrix4().makeTranslation( 64754.68, 15569.13, -4647.5 ) );
mesh.updateMatrix();
scene.add(mesh);
But I still have a problem with translating using the matrix. How can I avoid updating the mesh twice?
You need to specify your matrix elements by rows, like so:
matrix.set( n11, n12, n13, n14,
n21, n22, n23, n24,
n31, n32, n33, n34,
n41, n42, n43, n44 );
It is done this way so it is human-readable.
three.js r.76
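Applied to the question's data, which arrives in OpenGL's column-major order, that gives two equivalent options (a sketch; the numbers are the question's own):
// The question's matrix in OpenGL's column-major element order
var elements = [
0, 0, -1, 0,
-0.42262, -0.90631, 0, 0,
-0.90631, 0.42262, 0, 0,
64754.68, 15569.13, -4647.5, 1
];
// Option 1: fromArray() expects column-major order, so it can take the data as-is
var matrix = new THREE.Matrix4().fromArray( elements );
// Option 2: set() takes row-major arguments, so pass the transpose
matrix.set( 0, -0.42262, -0.90631, 64754.68,
0, -0.90631, 0.42262, 15569.13,
-1, 0, 0, -4647.5,
0, 0, 0, 1 );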
Suppose you rotate in the x, y, z sequence.
rotationMatrix = new THREE.Matrix4().multiplyMatrices(new THREE.Matrix4().makeRotationY(rV.y), new THREE.Matrix4().makeRotationX(rV.x));
rotationMatrix.premultiply(new THREE.Matrix4().makeRotationZ(rV.z));
matrix.copy(rotationMatrix).setPosition(vector3);
From the documentation, on modifying the object's matrix directly:
Note that matrixAutoUpdate must be set to false in this case, and you should make sure not to call updateMatrix. Calling updateMatrix will clobber the manual changes made to the matrix, recalculating the matrix from position, scale, and so on.
You'll find that after you call mesh.updateMatrix(), the mesh transformation matrix is different from the one you set. You can verify this by comparing matrix.elements to mesh.matrixWorld.elements; they will be the same once you remove the updateMatrix call.
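A minimal sketch of that pattern, reusing the question's matrix (written here in row-major order for set(), per the first answer):
mesh.matrixAutoUpdate = false; // we manage the matrix ourselves
mesh.matrix.set( 0, -0.42262, -0.90631, 64754.68,
0, -0.90631, 0.42262, 15569.13,
-1, 0, 0, -4647.5,
0, 0, 0, 1 );
// no updateMatrix() here: it would rebuild the matrix from position, quaternion and scale
scene.add( mesh );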

Unexpected mesh results from ThreeCSG boolean operation

I am creating a scene and have used a boolean function to cut holes in my wall. However, the lighting reveals that the resulting shapes have messed-up faces. I want the surface to look like one solid piece, rather than fragmented and lit backwards. Does anyone know what could be going wrong with my geometry?
The code that booleans objects is as follows:
//boolean subtract two shapes, convert meshes to bsps, subtract, then convert back to mesh
var booleanSubtract = function (Mesh1, Mesh2, material) {
//Mesh1 conversion
var mesh1BSP = new ThreeBSP( Mesh1 );
//Mesh2 conversion
var mesh2BSP = new ThreeBSP( Mesh2 );
var subtract_bsp = mesh1BSP.subtract( mesh2BSP );
var result = subtract_bsp.toMesh( material );
result.geometry.computeVertexNormals();
return result;
};
I have two lights in the scene:
var light = new THREE.DirectionalLight( 0xffffff, 0.75 );
light.position.set( 0, 0, 1 );
scene.add( light );
//create a point light
var pointLight = new THREE.PointLight(0xFFFFFF);
// set its position
pointLight.position.x = 10;
pointLight.position.y = 50;
pointLight.position.z = 130;
// add to the scene
scene.add(pointLight);
EDIT: Using WestLangley's suggestion, I was able to partially fix the wall rendering. And by using material.wireframe = true; I can see that after the boolean operation my wall faces are not merged. Is there a way to merge them?
Your problems are due to two issues.
First, you should be using FlatShading.
Second, as explained in this Stack Overflow post, MeshLambertMaterial only calculates the lighting at each vertex and interpolates the color across each face, while MeshPhongMaterial calculates the color at each texel.
You need to use MeshPhongMaterial to avoid the lighting artifacts you are seeing.
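A minimal sketch of both fixes applied to the question's helper (pre-r.77 flat-shading syntax, consistent with the version note below; wallMesh and holeMesh are illustrative):
var material = new THREE.MeshPhongMaterial( {
color: 0xcccccc,
shading: THREE.FlatShading // per-face normals instead of smoothed ones
} );
var wallWithHoles = booleanSubtract( wallMesh, holeMesh, material );
scene.add( wallWithHoles );
With flat shading the renderer uses per-face normals, so calling result.geometry.computeFaceNormals() inside the helper is what matters, rather than computeVertexNormals().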
three.js r.68

Rendering a large number of colored particles using three.js and the canvas renderer

I am trying to use the Three.js library to display a large number of colored points on the screen (about half a million to a million, for example). I am trying to use the Canvas renderer rather than the WebGL renderer if possible. (The web pages would also be displayed in Google Earth client bubbles, which seem to work with the Canvas renderer but not the WebGL renderer.)
While I have the problem solved for a small number of points (tens of thousands) by modifying the code from here, I am having trouble scaling it beyond that.
But with the following code, using WebGL and the particle system, I can render half a million random points - but without colors.
...
var particles = new THREE.Geometry();
var pMaterial = new THREE.ParticleBasicMaterial({
color: 0xFFFFFF,
size: 1,
sizeAttenuation : false
});
// now create the individual particles
for (var p = 0; p < particleCount; p++) {
// create a particle with random position values,
// -250 -> 250
var pX = Math.random() * POSITION_RANGE - (POSITION_RANGE / 2),
pY = Math.random() * POSITION_RANGE - (POSITION_RANGE / 2),
pZ = Math.random() * POSITION_RANGE - (POSITION_RANGE / 2),
particle = new THREE.Vertex(
new THREE.Vector3(pX, pY, pZ)
);
// add it to the geometry
particles.vertices.push(particle);
}
var particleSystem = new THREE.ParticleSystem(
particles, pMaterial);
scene.add(particleSystem);
...
Is the reason for the better performance of the above code the particle system? From what I have read in the documentation, it seems the particle system can only be used with the WebGL renderer.
So my question(s) are:
a) Can I render such a large number of particles using the Canvas renderer, or is it always going to be slower than the WebGL/ParticleSystem version? If so, how do I go about doing that? What objects and/or tricks do I use to improve performance?
b) Is there a compromise I can reach if I give up some features? In other words, can I still use the Canvas renderer for the large dataset if I give up the need to color the individual points?
c) If I have to give up Canvas and use the WebGL version, is it possible to change the colors of the individual points? It seems the color is set by the material passed to the ParticleSystem, and that sets the color for all the points.
EDIT: ParticleSystem and PointCloud have since been renamed to Points. In addition, ParticleBasicMaterial and PointCloudMaterial have been renamed to PointsMaterial.
This answer only applies to versions of three.js prior to r.125.
To have a different color for each particle, you need to have a color array as a property of the geometry, and then set vertexColors to THREE.VertexColors in the material, like so:
// vertex colors
var colors = [];
for( var i = 0; i < geometry.vertices.length; i++ ) {
// random color
colors[i] = new THREE.Color();
colors[i].setHSL( Math.random(), 1.0, 0.5 );
}
geometry.colors = colors;
// material
material = new THREE.PointsMaterial( {
size: 10,
transparent: true,
opacity: 0.7,
vertexColors: THREE.VertexColors
} );
// point cloud
pointCloud = new THREE.Points( geometry, material );
Your other questions are a little too general for me to answer; besides, it depends on exactly what you are trying to do and what your requirements are. Yes, you can expect Canvas to be slower.
EDIT: Updated for three.js r.124
