THREE.Mesh gets partially blocked by model - three.js

I intended to fill the space between two THREE.Line objects with some color, so I tried to use a THREE.Mesh. It looks fine from some angles below the model, but when looking from top to bottom, the mesh area gets partially blocked (the image is here). The other THREE.js objects, such as the red points and red lines, are not affected by the model, while the THREE.Mesh gets occluded. The following is my code for the THREE.Mesh:
// push vertices
geom.vertices.push((new THREE.Vector3()).fromArray(borepoint));
// add a face for every combination of three vertices
for (let i = 0; i < geom.vertices.length - 2; i++) {
    for (let j = i + 1; j < geom.vertices.length - 1; j++) {
        for (let k = j + 1; k < geom.vertices.length; k++) {
            geom.faces.push(new THREE.Face3(i, j, k));
        }
    }
}
const fillMesh = new THREE.Mesh(geom, new THREE.MeshBasicMaterial({
    side: THREE.DoubleSide,
    color: 0x2f92d7,
    transparent: false,
    opacity: 0.5,
}));
Does anyone have any idea about this? Thanks!

In this case, the problem turned out to be .depthTest. The object is getting hidden behind the terrain geometry due to z-buffering (aka the depth test).
Disabling .depthTest on the blue object's material fixed it.
Setting .transparent = true on the material flags it to be rendered second, in the transparency pass. Without it set, the drawing order may or may not be correct and the problem could still remain.
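A minimal sketch of those two material changes, applied to the fill mesh from the question:
// Sketch of the suggested fix on the question's fill material
const fillMesh = new THREE.Mesh(geom, new THREE.MeshBasicMaterial({
    side: THREE.DoubleSide,
    color: 0x2f92d7,
    transparent: true,   // render in the transparency pass, after opaque objects
    opacity: 0.5,
    depthTest: false     // ignore the terrain's depth buffer
}));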
One way to ensure that it will always render on top is to put the geometry in a different scene which you renderer.render() after the terrain scene, but that may be overkill for this scenario.
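A sketch of that separate-scene alternative (the overlayScene name here is an assumption, not from the question):
renderer.autoClear = false;
function render() {
    renderer.clear();
    renderer.render(scene, camera);        // terrain scene first
    renderer.clearDepth();                 // so the overlay ignores the terrain's depth buffer
    renderer.render(overlayScene, camera); // then the scene holding the blue fill mesh
}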

Related

set transparent (opacity: 0) to some faces of a sphere geometry

I want to make half the faces of a sphere transparent and the other half colored.
How can I do that?
I've tried to set a transparent color, but it seems it doesn't work that way.
geometry = new THREE.SphereGeometry(1.0, 17, 17);
for (var i = 0; i < geometry.faces.length; i++) {
    let x = Math.random();
    // Here I'm trying to set a transparent color to half the faces of the sphere.
    let color = 0;
    if (x < 0.5) {
        color = 0x000000;
    } else {
        color = 0xffffff;
    }
    geometry.faces[i].color.setHex(color);
}
var material = new THREE.MeshPhongMaterial({ vertexColors: THREE.VertexColors });
sphere = new THREE.Mesh(geometry, material);
All the faces of the sphere are colored if I do it the way above.
I want half the faces to be randomly selected and made transparent, so that the light inside the sphere scatters its rays like a god-rays effect, something like the one in the video below.
https://www.youtube.com/watch?v=suqFV7VGsL4
Looking at the GLSL shaders in three.js, three.js does not support alpha on vertex colors. It only uses red, green, and blue, not alpha.
To use vertex colors to make something transparent you'd need to write a custom shader or modify three.js's shaders.
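For illustration, here is a minimal custom-shader sketch (my addition, not the answerer's code): it stores a per-face alpha in a vertex attribute and reads it in the fragment shader. It assumes a three.js release where SphereGeometry is a BufferGeometry and setAttribute() exists (r125+); older releases would use SphereBufferGeometry and addAttribute().
// Sketch only: per-face alpha via a custom vertex attribute and a ShaderMaterial
const geometry = new THREE.SphereGeometry(1.0, 17, 17).toNonIndexed();
const vertexCount = geometry.attributes.position.count;
const alphas = new Float32Array(vertexCount);
// toNonIndexed() gives three vertices per triangle, so assign one alpha per face
for (let i = 0; i < vertexCount; i += 3) {
    const a = Math.random() < 0.5 ? 0.0 : 1.0; // roughly half the faces invisible
    alphas[i] = alphas[i + 1] = alphas[i + 2] = a;
}
geometry.setAttribute('alpha', new THREE.BufferAttribute(alphas, 1));
const material = new THREE.ShaderMaterial({
    transparent: true,
    uniforms: { color: { value: new THREE.Color(0xffffff) } },
    vertexShader: `
        attribute float alpha;
        varying float vAlpha;
        void main() {
            vAlpha = alpha;
            gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        }
    `,
    fragmentShader: `
        uniform vec3 color;
        varying float vAlpha;
        void main() {
            gl_FragColor = vec4(color, vAlpha);
        }
    `
});
const sphere = new THREE.Mesh(geometry, material);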

Three.js performance optimization with 10000 meshes

I load an .obj model in Three.js and then create independent meshes from its faces for a really interesting animation. But the problem is very bad performance with so many meshes.
In fact, a single mesh with 10000 faces works beautifully. But 10000 separate meshes (created from those faces) perform badly, even without animation, just as a static scene.
How can I optimize performance while keeping this animation?
Link: http://intelligence-group.ru/test.html
Here is the code creating meshes:
obj_loader.load(
    '/assets/models/zeus.obj',
    function (object) {
        var material = new THREE.MeshPhongMaterial({
            color: "#eeeeee",
            shading: THREE.FlatShading,
            metalness: 0,
            roughness: 0.5,
            refractionRatio: 0.25
        });
        // every per-face geometry reuses the same 0-1-2 face indices
        var face = new THREE.Face3(0, 1, 2);
        for (var i = 0; i < object.children.length; i++) {
            var child = object.children[i];
            var geometry = new THREE.Geometry().fromBufferGeometry(child.geometry);
            for (var j = 0; j < geometry.faces.length; j++) {
                var new_geometry = new THREE.Geometry();
                var a = geometry.faces[j].a;
                var b = geometry.faces[j].b;
                var c = geometry.faces[j].c;
                new_geometry.vertices.push(geometry.vertices[a]);
                new_geometry.vertices.push(geometry.vertices[b]);
                new_geometry.vertices.push(geometry.vertices[c]);
                new_geometry.faces.push(face);
                new_geometry.computeFaceNormals();
                var mesh = new THREE.Mesh(new_geometry, material);
                group.add(mesh);
            }
            full_orig_array(group); // animation function - not the reason for the bad performance!
        }
        scene.add(group);
    }
);
Important: after the animation completes I substitute the 10,000 meshes with one single mesh (the original object from the loader), and then you can see a big improvement in performance. It's not about the animation; I checked: even without animation, 10,000 meshes show the same bad performance.
As I understand it, the issue is the different geometry in each mesh. But I don't know how to solve this problem.
Please take into account that I don't duplicate geometry; each mesh's geometry is unique. That is the problem!
There are already a number of answers here on stackoverflow about the performance cost of drawcalls and state-changes so I won't go into that. You NEED to get the number of drawcalls down to render efficiently. How to do that is completely up to your exact problem and your creativity.
My suggestion would be to use a single BufferGeometry: You could just animate all vertex-positions within a single buffer-geometry. You would need to keep the state (translation, rotation, etc) outside of the geometry, but you can write code that freely transforms all of your triangles as if they were single objects.
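A rough sketch of that idea (the triangles/triangleCount bookkeeping below is assumed, not taken from the question; setAttribute() is the newer name for addAttribute()):
// One BufferGeometry holding every triangle: the whole model stays a single draw call
const positions = new Float32Array(triangleCount * 9); // 3 vertices * xyz per triangle
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);

// Per frame: transform each triangle as if it were its own object,
// writing the result back into the shared position buffer
const v = new THREE.Vector3();
function updateTriangles() {
    for (let t = 0; t < triangleCount; t++) {
        const tri = triangles[t]; // assumed shape: { base: Float32Array(9), matrix: THREE.Matrix4 }
        for (let k = 0; k < 3; k++) {
            v.fromArray(tri.base, k * 3).applyMatrix4(tri.matrix);
            v.toArray(positions, t * 9 + k * 3);
        }
    }
    geometry.attributes.position.needsUpdate = true;
}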
You get overhead from many draw calls and WebGL state changes. Rendering as one mesh is a single draw call vs 10,000.
You can use three's InstancedBufferGeometry to merge these into one call, without duplicating the geometry (thus saving both memory and overhead).
This class unfortunately does not work with default materials, shadows etc. It's a fairly low level struct.
I wrote a further abstraction of this that should work on the same level as THREE.Mesh and work with shadows, AO, depth etc.
https://www.npmjs.com/package/three-instanced-mesh
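As a side note (my addition, not part of the answer above): three.js r109+ ships a built-in THREE.InstancedMesh that works with the standard materials and renders many copies of one shared geometry in a single draw call. A minimal sketch, with triangleGeometry and material assumed:
const count = 10000;
const instanced = new THREE.InstancedMesh(triangleGeometry, material, count);
const dummy = new THREE.Object3D();
for (let i = 0; i < count; i++) {
    // position/rotate/scale each instance however the animation requires
    dummy.position.set(Math.random() * 100, Math.random() * 100, Math.random() * 100);
    dummy.updateMatrix();
    instanced.setMatrixAt(i, dummy.matrix);
}
instanced.instanceMatrix.needsUpdate = true;
scene.add(instanced);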

Stretching Texture across Merged Geometries

I am building a first-person maze; each wall in the maze is a THREE.PlaneGeometry. As I understand it, it is much better practice to have these walls all merged into a single object. I have created a class "walls" which does this each time a new "wall" is added:
this.geometry.merge(wall.mesh.geometry, wall.mesh.matrix);
After all the walls are added, I create a material + apply a texture:
this.material = new THREE.MeshBasicMaterial({
    map: this.texture,
    side: THREE.DoubleSide
});
this.mesh = new THREE.Mesh(this.geometry, this.material);
The problem I am having is that the walls are not all the same width and so the texture mapping goes all funky (Image Here) on the merged geo. Each plane receives the entire texture and stretches it to its dimensions. Bearing in mind that all the walls require the same texturing, what would be the best way to 'wrap' the texture around the walls? Or should I be constructing my walls differently in the first place?
I have also given both THREE.MultiMaterial(materials) and THREE.MeshFaceMaterial(materials) a go (giving a unique material to each wall), but both seriously kill performance.
Thanks in advance,
Happy Days,
J
Screenshot: Maze Image
Run a loop over each faceVertexUv of each wall's geometry. Multiply each UV.x by the width of the wall (this eliminates the stretching/shrinking). Then add an offset to each UV.x that corresponds to the summed width of the walls so far:
var len = wall.geometry.faceVertexUvs[0].length;
for (var i = 0; i < len; i++) {
    for (var j = 0; j < 3; j++) {
        wall.geometry.faceVertexUvs[0][i][j].x *= wall.w;
        wall.geometry.faceVertexUvs[0][i][j].x += this.w;
    }
}
this.w += wall.w;
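One detail the snippet above relies on: for UV.x values greater than 1 to tile rather than clamp, the shared texture presumably needs horizontal repeat wrapping (an assumption, since the texture setup isn't shown):
// Assumption, not shown in the answer: let the texture repeat along U
this.texture.wrapS = THREE.RepeatWrapping;
this.texture.needsUpdate = true;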

threejs selecting different parts of a mesh

I'm using THREE.js. I have a model of a human that I want to be able to select different portions of. For example, if you click on one of the legs a particular action will be executed. My original idea was to split the model up into separate meshes and then use raytracing to determine which object was selected. But now when i render the scene, the shading along the edges of each mesh doesn't blend with adjoining meshes. This leaves ragged looking lines across the model between selectable portions. Is there a way to blend the shading between the mesh pieces I've created? Or is there a better way to select part of a mesh other than creating separate meshes? I have some programming experience, but this is the first time I've tried to use three.js. Any insight would be greatly appreciated.
You may create an additional attribute for each triangle: the color of the body part it belongs to. So all triangles of the left leg would be red, all triangles of the right leg would be blue, etc.
Render your model normally, and add a second pass where you render the triangles colored in the way described above, with no shading at all. Then take the mouse position where the user clicked, look it up in that body-part-colored framebuffer, and just check the pixel color at the clicked position.
This technique of picking 3D objects by assigning them different colors, rendering those colors to another texture, and then checking the color of the clicked pixel is quite common, although it has some flaws. On the other hand, ray testing isn't absolutely accurate either.
I believe that this demo runs actually based on that concept - demo.
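A minimal sketch of that color-picking pass (my illustration, not the answerer's code; it assumes a separate pickingScene of flat-colored clones whose MeshBasicMaterial color encodes a part id, and a three.js release that has renderer.setRenderTarget()):
const pickingTarget = new THREE.WebGLRenderTarget(1, 1);
const pixelBuffer = new Uint8Array(4);
function pick(mouseX, mouseY) {
    // render just the 1x1 region under the cursor into the picking target
    camera.setViewOffset(renderer.domElement.width, renderer.domElement.height,
        mouseX, mouseY, 1, 1);
    renderer.setRenderTarget(pickingTarget);
    renderer.render(pickingScene, camera);
    renderer.setRenderTarget(null);
    camera.clearViewOffset();
    // read back the single pixel and rebuild the encoded part id
    renderer.readRenderTargetPixels(pickingTarget, 0, 0, 1, 1, pixelBuffer);
    return (pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | pixelBuffer[2];
}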
var aiGeojj = new t.CubeGeometry(30, 30, 30);
var uprighters = Math.floor(Math.random() * 11);
var aiMaterialjj = new t.MeshBasicMaterial({ map: t.ImageUtils.loadTexture('images/images_bots/greenbot/upright/' + uprighters + '.gif'), opacity: 0, transparent: true });
var ojj = new t.Mesh(aiGeojj, aiMaterialjj);
// keep references to the selectable sub-meshes on the parent
ojj.limbs = [];
ojj.trunk = [];
var aiGeojjkey2c = new t.CubeGeometry(50, 50, 50);
var uprightersc = Math.floor(Math.random() * 11);
var aiMaterialjjc = new t.MeshBasicMaterial({ map: t.ImageUtils.loadTexture('images/images_bots/greenbot/upright/' + uprightersc + '.gif'), opacity: 1, transparent: true });
var ojjkey2c = new t.Mesh(aiGeojjkey2c, aiMaterialjjc);
ojjkey2c.id = "hiworld";
ojj.add(ojjkey2c);
ojj.trunk.push(ojjkey2c);
// later: walk the stored parts (this assumes the bots are collected into an array)
for (var you = 0; you < ojj.length; you++) {
    for (var youb = 0; youb < ojj[you].trunk.length; youb++) {
        window.alert(ojj[you].trunk[youb].id);
    }
}

Rendering a large number of colored particles using three.js and the canvas renderer

I am trying to use the Three.js library to display a large number of colored points on the screen (about half a million to a million, for example). I am trying to use the Canvas renderer rather than the WebGL renderer if possible. (The web pages would also be displayed in Google Earth client bubbles, which seems to work with the Canvas renderer but not the WebGL renderer.)
While I have the problem solved for a small number of points (tens of thousands) by modifying the code from here, I am having trouble scaling it beyond that.
But with the following code using WebGL and the particle system I can render half a million random points, but without colors.
...
var particles = new THREE.Geometry();
var pMaterial = new THREE.ParticleBasicMaterial({
    color: 0xFFFFFF,
    size: 1,
    sizeAttenuation: false
});
// now create the individual particles
for (var p = 0; p < particleCount; p++) {
    // create a particle with random position values,
    // -250 -> 250
    var pX = Math.random() * POSITION_RANGE - (POSITION_RANGE / 2),
        pY = Math.random() * POSITION_RANGE - (POSITION_RANGE / 2),
        pZ = Math.random() * POSITION_RANGE - (POSITION_RANGE / 2),
        particle = new THREE.Vertex(
            new THREE.Vector3(pX, pY, pZ)
        );
    // add it to the geometry
    particles.vertices.push(particle);
}
var particleSystem = new THREE.ParticleSystem(
    particles, pMaterial);
scene.add(particleSystem);
...
Is the better performance of the above code due to the particle system? From what I have read in the documentation, it seems the particle system can only be used by the WebGL renderer.
So my questions are:
a) Can I render such a large number of particles using the Canvas renderer, or is it always going to be slower than the WebGL/ParticleSystem version? If so, how do I go about doing that? What objects and/or tricks do I use to improve performance?
b) Is there a compromise I can reach if I give up some features? In other words, can I still use the Canvas renderer for the large dataset if I give up the need to color the individual points?
c) If I have to give up the Canvas and use the WebGL version, is it possible to change the colors of the individual points? It seems the color is set by the material passed to the ParticleSystem and that sets the color for all the points.
EDIT: ParticleSystem and PointCloud have been renamed to Points. In addition, ParticleBasicMaterial and PointCloudMaterial have been renamed to PointsMaterial.
This answer only applies to versions of three.js prior to r.125.
To have a different color for each particle, you need to have a color array as a property of the geometry, and then set vertexColors to THREE.VertexColors in the material, like so:
// vertex colors
var colors = [];
for (var i = 0; i < geometry.vertices.length; i++) {
    // random color
    colors[i] = new THREE.Color();
    colors[i].setHSL(Math.random(), 1.0, 0.5);
}
geometry.colors = colors;
// material
material = new THREE.PointsMaterial({
    size: 10,
    transparent: true,
    opacity: 0.7,
    vertexColors: THREE.VertexColors
});
// point cloud
pointCloud = new THREE.Points(geometry, material);
Your other questions are a little too general for me to answer, and besides, it depends on exactly what you are trying to do and what your requirements are. Yes, you can expect Canvas to be slower.
EDIT: Updated for three.js r.124
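For completeness (my addition, not part of the original answer): on r125 and later, where THREE.Geometry was removed, a roughly equivalent setup stores positions and colors as BufferGeometry attributes:
const positions = new Float32Array(particleCount * 3);
const colors = new Float32Array(particleCount * 3);
const color = new THREE.Color();
for (let i = 0; i < particleCount; i++) {
    positions[i * 3 + 0] = Math.random() * 500 - 250;
    positions[i * 3 + 1] = Math.random() * 500 - 250;
    positions[i * 3 + 2] = Math.random() * 500 - 250;
    color.setHSL(Math.random(), 1.0, 0.5);
    color.toArray(colors, i * 3);
}
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3));
const material = new THREE.PointsMaterial({ size: 10, vertexColors: true }); // boolean flag on r125+
const points = new THREE.Points(geometry, material);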
