Three.js merging mesh/geometry objects - performance

I'm creating a three.js app which consists of a floor (composed of different tiles) and shelving units (more than 5000 of them). I'm having performance issues and low FPS (lower than 20), and I think it's because I'm creating a separate mesh for every tile and shelving unit. I know that I can leverage geometry/mesh merging to improve performance. This is the code for rendering the floor and shelving units (cells):
// add ground tiles
const tileGeometry = new THREE.PlaneBufferGeometry(1, 1, 1);
const edgeGeometry = new THREE.EdgesGeometry(tileGeometry);
const edges = new THREE.LineSegments(edgeGeometry, edgeMaterial);
let initialMesh = new THREE.Mesh(tileGeometry, floorMat);
Object.keys(groundTiles).forEach((key, index) => {
  let tile = groundTiles[key];
  let tileMesh = initialMesh.clone();
  tileMesh.position.set(
    tile.leftPoint[0] + tile.size[0] / 2,
    tile.leftPoint[1] + tile.size[1] / 2,
    0
  );
  tileMesh.scale.x = tile.size[0];
  tileMesh.scale.y = tile.size[1];
  tileMesh.name = `${tile.leftPoint[0]}-${tile.leftPoint[1]}`;
  // Add tile edges (adds tile border lines)
  tileMesh.add(edges.clone());
  scene.add(tileMesh);
});
// add shelving units
const cellGeometry = new THREE.BoxBufferGeometry(790, 790, 250);
const wireframe = new THREE.WireframeGeometry(cellGeometry);
const cellLine = new THREE.LineSegments(wireframe, shelves_material);
Object.keys(cells).forEach((key, index) => {
  let cell = cells[key];
  const cellMesh = cellLine.clone();
  cellMesh.position.set(
    cell["x"] + 790 / 2,
    // cell["x"],
    cell["y"] + 490 / 2,
    cell["z"] - 250
  );
  scene.add(cellMesh);
});
Also, here is a link to a screenshot of the final result.
I saw this article about merging geometries, but I don't know how to implement it in my case because of the edges, line segments and wireframe objects I'm using.
Any help would be appreciated.

Taking into account #Mugen87's comment, here's a possible approach:
Pretty straightforward merging of planes
Using a shader material to draw "borders"
Note: comment out the discard; line to fill the cards with red or whatever material you might want.
JsFiddle demo
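To illustrate the idea, here is a rough sketch (not the code from the fiddle) that merges all tile planes into one geometry and draws their borders in a fragment shader. It assumes the groundTiles and scene variables from the question, and the exact name/location of the merge helper depends on your three.js version:
// Requires the BufferGeometryUtils helper from the three.js examples
// (examples/js/utils/BufferGeometryUtils.js for script builds, or the examples/jsm
// module import); in newer releases the function is called mergeGeometries.
const tileGeometries = [];
Object.keys(groundTiles).forEach((key) => {
  const tile = groundTiles[key];
  // One plane per tile, already sized and positioned, so no per-mesh transform is needed.
  const plane = new THREE.PlaneBufferGeometry(tile.size[0], tile.size[1]);
  plane.translate(
    tile.leftPoint[0] + tile.size[0] / 2,
    tile.leftPoint[1] + tile.size[1] / 2,
    0
  );
  tileGeometries.push(plane);
});
const mergedTiles = THREE.BufferGeometryUtils.mergeBufferGeometries(tileGeometries);

// Border shader: keep only fragments near a tile's UV edge, discard the rest.
const borderMaterial = new THREE.ShaderMaterial({
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    varying vec2 vUv;
    void main() {
      float border = 0.02; // border thickness in UV space
      if (vUv.x > border && vUv.x < 1.0 - border &&
          vUv.y > border && vUv.y < 1.0 - border) {
        discard; // comment this out to fill the tiles instead
      }
      gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    }
  `
});

// One mesh, one draw call for the whole floor.
scene.add(new THREE.Mesh(mergedTiles, borderMaterial));
Since every plane keeps its own 0-1 UVs after merging, the border test works per tile. The same merging idea should also apply to the shelving units: translate each WireframeGeometry into place, merge them, and add the result as a single LineSegments object.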

Related

three js LoadObject pivot [duplicate]

What I'm trying to achieve is a rotation of the geometry around a pivot point, and to make that the new definition of the geometry. I do not want to keep editing rotationZ; I want the current rotationZ to become the new rotationZ 0.
This way, when I create a new rotation task, it will start from the newly given pivot point and the newly given angle in radians.
Here is what I've tried, but then the rotation point moves:
// Add cube to do calculations
var box = new THREE.Box3().setFromObject( o );
var size = box.getSize();
var offsetZ = size.z / 2;
o.geometry.translate(0, -offsetZ, 0);
// Do rotation
o.rotateZ(CalcUtils.degreeToRad(degree));
o.geometry.translate(0, offsetZ, 0);
I also tried to add a Group, rotate that group, and then remove the group. But I need to keep the rotation without all the extra objects. Here is the code I created:
var box = new THREE.Box3().setFromObject( o );
var size = box.size();
var geometry = new THREE.BoxGeometry( 20, 20, 20 );
var material = new THREE.MeshBasicMaterial( { color: 0xcc0000 } );
var cube = new THREE.Mesh( geometry, material );
cube.position.x = o.position.x;
cube.position.y = 0; // Height / 2
cube.position.z = -size.z / 2;
o.position.x = 0;
o.position.y = 0;
o.position.z = size.z / 2;
cube.add(o);
scene.add(cube);
// Do rotation
cube.rotateY(CalcUtils.degreeToRad(degree));
// Remove cube, and go back to single object
var position = o.getWorldPosition();
scene.add(o)
scene.remove(cube);
console.log(o);
o.position.x = position.x;
o.position.y = position.y;
o.position.z = position.z;
So my question: how do I save the current rotation as the new zero rotation, i.e. make the rotation final?
EDIT
I added an image of what I want to do. The object is green. I have a 0 point of the world (black). I have a 0 point of the object (red). And I have rotation point (blue).
How can I rotate the object around the blue point?
I wouldn't recommend updating the vertices, because you'll run into trouble with the normals (unless you keep them up-to-date, too). Basically, it's a lot of hassle to perform an action for which the transformation matrices were intended.
You came pretty close by translating, rotating, and un-translating, so you were on the right track. There are some built-in methods which can help make this super easy.
// obj - your object (THREE.Object3D or derived)
// point - the point of rotation (THREE.Vector3)
// axis - the axis of rotation (normalized THREE.Vector3)
// theta - radian value of rotation
// pointIsWorld - boolean indicating the point is in world coordinates (default = false)
function rotateAboutPoint(obj, point, axis, theta, pointIsWorld){
  pointIsWorld = (pointIsWorld === undefined) ? false : pointIsWorld;

  if(pointIsWorld){
    obj.parent.localToWorld(obj.position); // compensate for world coordinate
  }

  obj.position.sub(point); // remove the offset
  obj.position.applyAxisAngle(axis, theta); // rotate the POSITION
  obj.position.add(point); // re-add the offset

  if(pointIsWorld){
    obj.parent.worldToLocal(obj.position); // undo world coordinates compensation
  }

  obj.rotateOnAxis(axis, theta); // rotate the OBJECT
}
After this method completes, the rotation/position IS persisted. The next time you call the method, it will transform the object from its current state to wherever your inputs define next.
Also note the compensation for using world coordinates. This allows you to use a point in either world coordinates or local space by converting the object's position vector into the correct coordinate system. It's probably best to use it this way any time your point and object are in different coordinate systems, though your observations may differ.
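For example, a quick illustrative usage (the pivot point, axis, and angle here are made-up values):
// Rotate a cube by 45 degrees around the world-space point (2, 0, 0), about the Z axis.
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial()
);
scene.add(cube);

const pivot = new THREE.Vector3(2, 0, 0);
const axis = new THREE.Vector3(0, 0, 1); // must be normalized
rotateAboutPoint(cube, pivot, axis, Math.PI / 4, true);
Calling it again keeps rotating the cube around that point from wherever it currently is.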
As a simple solution for anyone trying to quickly change the pivot point of an object, I would recommend creating a group and adding the mesh to the group, and rotating around that.
Full example
const geometry = new THREE.BoxGeometry();
const material = new THREE.MeshBasicMaterial({ color: 0xff0000 });
const cube = new THREE.Mesh(geometry, material);
scene.add(cube)
Right now, this will just rotate around its center
cube.rotation.z = Math.PI / 4
Create a new group and add the cube
const group = new THREE.Group();
group.add(cube)
scene.add(group)
At this point we are back where we started. Now move the mesh:
cube.position.set(0.5,0.5,0)
Then move the group
group.position.set(-0.5, -0.5, 0)
Now use your group to rotate the object:
group.rotation.z = Math.PI / 4
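Putting those steps together, a small helper along these lines can do the re-parenting for you (an illustrative sketch, assuming the mesh is a direct child of the scene):
// Rotate `mesh` around an arbitrary pivot point by wrapping it in a group.
function rotateAroundPivot(mesh, pivot, angleZ) {
  const group = new THREE.Group();
  group.position.copy(pivot); // place the group at the pivot point
  mesh.position.sub(pivot);   // keep the mesh where it was, now expressed relative to the group
  group.add(mesh);            // re-parents the mesh from the scene to the group
  group.rotation.z = angleZ;  // rotating the group orbits the mesh around the pivot
  scene.add(group);
  return group;
}

// Same result as the steps above:
rotateAroundPivot(cube, new THREE.Vector3(-0.5, -0.5, 0), Math.PI / 4);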

Three.js memory allocation & workflow question

Let’s say I want to make 100 objects - for example cars, like the one you see here:
This car is currently composed of five meshes: one yellow cube and four blue spheres.
What I’d like to know is what would be the most efficient/correct way to make 100 of these cars - or maybe 500 - in terms of memory management/ CPU performance, etc.
The way I’m currently going about doing this is as follows:
Make an empty THREE.Group called "newCarGroup"
Create the yellow rectangular Mesh for the body of the car - called “carBodyMesh”
Create four blue Sphere Meshes for the Tires called “tire1Mesh”, “tire2Mesh”, “tire3Mesh”, and “tire4Mesh”
Add the Body and the four Tires to the “newCarGroup”
And finally, in a FOR loop, create/instantiate 100 “newCarGroup” objects, adding each one to the SCENE at a random position
The code is below.
It's working perfectly well right now, but I’d like to know if this is the “proper”/best way to do this?
Consider it’s possible I might end up needing 1,000 cars - or 5,000 cars. So will this scale properly?
Also, I need to add more objects to the car: like 4 windows - actually make that 6 windows, to also include the front and back windshields, then four headlights, etc.
So the final Car Object alone may end up being comprised of 20 meshes - or more.
Being that I’m kinda new to THREE.JS I wanna make sure I develop good habits and go about this sort of thing the right way.
Here’s my code:
function makeOneCar() {
  var newCarGroup = new THREE.Group();

  // 1. CAR-Body:
  const bodyGeometry = new THREE.BoxGeometry(30, 10, 10);
  const bodyMaterial = new THREE.MeshPhongMaterial({ color: "yellow" });
  const carBodyMesh = new THREE.Mesh(bodyGeometry, bodyMaterial);

  // 2. TIRES:
  const tireGeometry = new THREE.SphereGeometry(2, 16, 16);
  const tireMaterial = new THREE.MeshPhongMaterial({ color: "blue" });
  const tire1Mesh = new THREE.Mesh(tireGeometry, tireMaterial);
  const tire2Mesh = new THREE.Mesh(tireGeometry, tireMaterial);
  const tire3Mesh = new THREE.Mesh(tireGeometry, tireMaterial);
  const tire4Mesh = new THREE.Mesh(tireGeometry, tireMaterial);

  // TIRE 1 Position:
  tire1Mesh.position.x = carBodyMesh.position.x - 11;
  tire1Mesh.position.y = carBodyMesh.position.y - 4.15;
  tire1Mesh.position.z = carBodyMesh.position.z + 4.5;

  // TIRE 2 Position:
  tire2Mesh.position.x = carBodyMesh.position.x + 11;
  tire2Mesh.position.y = carBodyMesh.position.y - 4.15;
  tire2Mesh.position.z = carBodyMesh.position.z + 4.5;

  // TIRE 3 Position:
  tire3Mesh.position.x = carBodyMesh.position.x - 11;
  tire3Mesh.position.y = carBodyMesh.position.y - 4.15;
  tire3Mesh.position.z = carBodyMesh.position.z - 4.5;

  // TIRE 4 Position:
  tire4Mesh.position.x = carBodyMesh.position.x + 11;
  tire4Mesh.position.y = carBodyMesh.position.y - 4.15;
  tire4Mesh.position.z = carBodyMesh.position.z - 4.5;

  // Putting it all together:
  newCarGroup.add(carBodyMesh);
  newCarGroup.add(tire1Mesh);
  newCarGroup.add(tire2Mesh);
  newCarGroup.add(tire3Mesh);
  newCarGroup.add(tire4Mesh);

  // Setting (x, y, z) coordinates - RANDOMLY
  let randy = Math.floor(Math.random() * 10);
  let newCarGroupX = randy % 2 == 0 ? Math.random() * 250 : Math.random() * -250;
  let newCarGroupY = 0.0;
  let newCarGroupZ = randy % 2 == 0 ? Math.random() * 250 : Math.random() * -250;
  newCarGroup.position.set(newCarGroupX, newCarGroupY, newCarGroupZ);

  scene.add(newCarGroup);
}

function makeCars() {
  for (var carCount = 0; carCount < 100; carCount++) {
    makeOneCar();
  }
}
I’d like to know if this is the “proper”/best way to do this?
This is subjective. You say the method works great for your current use-case, so for that use-case, it is fine.
So will this scale properly?
The simple answer is: No. The more complex answer is: ...not really.
You're re-using the geometry and materials, which is good. But every Mesh you create has meta information surrounding it, which adds to your overall memory footprint.
Also, every standard Mesh you add incurs what is known as a "draw call", which is the GPU drawing that particular shape. Instead, take a look at InstancedMesh. This allows the GPU to be given instructions on how to draw the shape throughout the scene once. Yes, rather than drawing each cube individually, the GPU can draw all the cubes at the same time, and they can even have different colors and transformations. There are limitations to this class, but it's a good starting point to understanding how instancing works.
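As a rough sketch of what the instanced version could look like (the geometry sizes and tire offsets mirror the question's code; InstancedMesh is available in recent three.js releases):
const CAR_COUNT = 1000;

const bodyGeometry = new THREE.BoxGeometry(30, 10, 10);
const bodyMaterial = new THREE.MeshPhongMaterial({ color: "yellow" });
const tireGeometry = new THREE.SphereGeometry(2, 16, 16);
const tireMaterial = new THREE.MeshPhongMaterial({ color: "blue" });

// One instanced mesh for all car bodies, one for all tires (4 per car).
const bodies = new THREE.InstancedMesh(bodyGeometry, bodyMaterial, CAR_COUNT);
const tires = new THREE.InstancedMesh(tireGeometry, tireMaterial, CAR_COUNT * 4);

const tireOffsets = [
  new THREE.Vector3(-11, -4.15, 4.5),
  new THREE.Vector3(11, -4.15, 4.5),
  new THREE.Vector3(-11, -4.15, -4.5),
  new THREE.Vector3(11, -4.15, -4.5),
];

const dummy = new THREE.Object3D();
for (let i = 0; i < CAR_COUNT; i++) {
  const x = (Math.random() - 0.5) * 500;
  const z = (Math.random() - 0.5) * 500;

  // Place this car's body instance.
  dummy.position.set(x, 0, z);
  dummy.updateMatrix();
  bodies.setMatrixAt(i, dummy.matrix);

  // Place the four tire instances relative to the body.
  for (let t = 0; t < 4; t++) {
    dummy.position.set(x + tireOffsets[t].x, tireOffsets[t].y, z + tireOffsets[t].z);
    dummy.updateMatrix();
    tires.setMatrixAt(i * 4 + t, dummy.matrix);
  }
}
bodies.instanceMatrix.needsUpdate = true;
tires.instanceMatrix.needsUpdate = true;
scene.add(bodies, tires);
This renders every car body in one draw call and every tire in a second one, no matter how many cars there are; per-instance colors are also possible via setColorAt.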

Three js RayCast between two object in scene

I know how to raycast an object in the scene when the mouse is clicked, but now I need to know if two objects in the scene can raycast each other.
That is, I load a 3D object into the scene, for example two rooms as an OBJ model, then I add three box meshes at certain points: for example two points in the first room and one point in the second room.
The two points in the first room can raycast each other (they have direct vision), but a point in the first room can't raycast the point in the second room (they don't have vision through the room wall).
I attached the code used to load the scene and the points. Any suggestion on how to do this?
// LOAD MAIN 3D OBJECT
var objLoader = new THREE.OBJLoader();
objLoader.setMaterials(materials);
objLoader.setPath('./asset/3d/');
objLoader.load("model01.obj", function(object){
  var mesh = object.children[0];
  mesh.castShadow = true;
  mesh.receiveShadow = true;
  mesh.rotation.x = Math.PI / 2;

  var box = new THREE.Box3().setFromObject( object );
  var ox = -(box.max.x + box.min.x) / 2;
  var oy = -(box.max.y + box.min.y) / 2;
  var oz = -(box.max.z + box.min.z) / 2;
  mesh.position.set(ox, oy, oz);

  _scene.add(mesh);
  render();
  setTimeout(render, 1000);
});

// LOAD count_points inside scene
for(var i = 0; i < cta_points; i++){
  var c_r = 2;
  var c_geometry = new THREE.BoxBufferGeometry( c_r, c_r, c_r );
  var c_material = new THREE.MeshLambertMaterial({
    color: new THREE.Color("rgb(" + (40 + 30) + ", 0, 0)"),
    opacity: 0.0,
    transparent: true
  });
  var c_mesh = new THREE.Mesh( c_geometry, c_material );
  var position = get_positions(i);
  c_mesh.position.copy(position);
  c_mesh.name = "BOX";
  scene.add( c_mesh );
}
Possibly take a look at:
How to detect collision in three.js?
Usually, to solve this problem, you would make a collision mask with a collision group.
The collision group is added per object, and is represented by a "bit" in a bitmask.
The wall could be in a separate collision group, like 4 (binary 100),
and the objects could be in another group, say 2 (binary 10).
Then you just need to check collisions of the object against the mask
(i.e. check whether an object's collision group matches the bitmask; the masks above would be 100 and 10).
That way, you can call THREE.Raycaster().intersectObjects(args), where the arguments are the objects that pass the bitmask test (mask == object.collision_group).
So you don't need to include the wall in every collision test, since it sits in a separate bitmask group.
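Putting the bitmask idea together with the actual ray test, a minimal sketch might look like this (the group values, the userData field, and the roomMesh/boxMesh names are illustrative, not from the question's code):
const WALL_GROUP = 0b100;  // 4
const POINT_GROUP = 0b010; // 2

roomMesh.userData.collisionGroup = WALL_GROUP;
scene.traverse(function (obj) {
  if (obj.name === "BOX") obj.userData.collisionGroup = POINT_GROUP;
});

// Does meshA have direct vision of meshB, considering only objects whose
// collision group matches the given mask?
function canSee(meshA, meshB, mask) {
  const origin = meshA.getWorldPosition(new THREE.Vector3());
  const target = meshB.getWorldPosition(new THREE.Vector3());
  const direction = target.clone().sub(origin);
  const distance = direction.length();
  direction.normalize();

  // Collect only the potential blockers that pass the bitmask test.
  const blockers = [];
  scene.traverse(function (obj) {
    if (obj.isMesh && (obj.userData.collisionGroup & mask)) blockers.push(obj);
  });

  // The far limit means anything hit beyond the target is ignored.
  const raycaster = new THREE.Raycaster(origin, direction, 0, distance);
  return raycaster.intersectObjects(blockers, false).length === 0;
}

// Two points see each other if no wall lies between them:
const visible = canSee(boxMesh1, boxMesh2, WALL_GROUP);
The key detail is the Raycaster's far parameter: only hits between the two points count as blocking the line of sight.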

Three js: How to normalize a mesh generated by vertices

I'm somewhat new to Three js, and my linear algebra days were back in the 90s, so I don't recall much about quaternions. My issue is that I have 8 vertices for a cube that I can use to create a custom geometry mesh, but that doesn't set the position / rotation / scale info for its world matrix. Therefore it can't be used cleanly by other three.js modules like controls. I can look up the math and calculate what position / scale / rotation should be (rotation gets a bit hairy with some fun acos stuff) and create a standard BoxGeometry from that. But it seems like there should be some way to do it via three.js objects if I can generate the proper matrix to apply. The quaternion setFromUnitVectors looked interesting, but I'd still have to do some work to generate the vectors. Any ideas would be appreciated, thanks.
Edit: :) So let me try and simplify. I have 8 vertices, I want to create a box geometry. But box geometry doesn't take vertices. It takes width, height, depth (relatively easy to calculate) and then you set the position/scale/rotation. So here's my code thus far:
 5____4
1/___0/|
| 6__|_7
2/___3/
const box = new Box3();
box.setFromPoints(points);
const width = points[1].distanceTo(points[0]);
const height = points[3].distanceTo(points[0]);
const depth = points[4].distanceTo(points[0]);
const geometry = new BoxGeometry(width, height, depth);
mesh = new Mesh(geometry, material);
const center = box.getCenter(new Vector3());
const normalizedCorner = points[0].clone().sub(center);
const quarterian = new Quaternion();
quarterian.setFromUnitVectors(geometry.vertices[0], normalizedCorner);
mesh.setRotationFromQuaternion(quarterian);
mesh.position.copy(center);
The problem is that my rotation is wrong (besides my vectors not being unit vectors). I'm apparently not computing the correct quaternion to rotate my mesh.
Edit: From WestLangley's suggestion, I'm creating a rotation matrix. However, while it rotates in the correct plane, the angle is off. Here's what I have added:
const matrix = new Matrix4();
const widthVector = new Vector3().subVectors(points[6], points[7]).normalize();
const heightVector = new Vector3().subVectors(points[6], points[5]).normalize();
const depthVector = new Vector3().subVectors(points[6], points[2]).normalize();
matrix.set(
  widthVector.x, heightVector.x, depthVector.x, 0,
  widthVector.y, heightVector.y, depthVector.y, 0,
  widthVector.z, heightVector.z, depthVector.z, 0,
  0, 0, 0, 1
);
mesh.quaternion.setFromRotationMatrix(matrix);
Per WestLangley's comments I wasn't creating my matrix correctly. The correct matrix looks like:
const matrix = new Matrix4();
const widthVector = new Vector3().subVectors(points[7], points[6]).normalize();
const heightVector = new Vector3().subVectors(points[5], points[6]).normalize();
const depthVector = new Vector3().subVectors(points[2], points[6]).normalize();
matrix.set(
  widthVector.x, heightVector.x, depthVector.x, 0,
  widthVector.y, heightVector.y, depthVector.y, 0,
  widthVector.z, heightVector.z, depthVector.z, 0,
  0, 0, 0, 1
);
mesh.quaternion.setFromRotationMatrix(matrix);

Rendering a large number of colored particles using three.js and the canvas renderer

I am trying to use the Three.js library to display a large number of colored points on the screen (about half a million to million for example). I am trying to use the Canvas renderer rather than the WebGL renderer if possible (The web pages would also be displayed in the Google Earth Client bubbles, which seems to work with Canvas renderer but not the WebGL renderer.)
While I have the problem solved for a small number of points (tens of thousands) by modifying the code from here, I am having trouble scaling it beyond that.
But in the following code, using WebGL and the Particle System, I can render half a million random points, but without colors.
...
var particles = new THREE.Geometry();
var pMaterial = new THREE.ParticleBasicMaterial({
  color: 0xFFFFFF,
  size: 1,
  sizeAttenuation: false
});

// now create the individual particles
for (var p = 0; p < particleCount; p++) {
  // create a particle with random position values, -250 -> 250
  var pX = Math.random() * POSITION_RANGE - (POSITION_RANGE / 2),
      pY = Math.random() * POSITION_RANGE - (POSITION_RANGE / 2),
      pZ = Math.random() * POSITION_RANGE - (POSITION_RANGE / 2),
      particle = new THREE.Vertex(
        new THREE.Vector3(pX, pY, pZ)
      );

  // add it to the geometry
  particles.vertices.push(particle);
}

var particleSystem = new THREE.ParticleSystem(particles, pMaterial);
scene.add(particleSystem);
...
Is the reason for the better performance of the above code due to the Particle System? From what I have read in the documentation it seems the Particle System can only be used by the WebGL renderer.
So my question(s) are
a) Can I render such a large number of particles using the Canvas renderer, or is it always going to be slower than the WebGL/ParticleSystem version? If so, how do I go about doing that? What objects and/or tricks do I use to improve performance?
b) Is there a compromise I can reach if I give up some features? In other words, can I still use the Canvas renderer for the large dataset if I give up the need to color the individual points?
c) If I have to give up the Canvas and use the WebGL version, is it possible to change the colors of the individual points? It seems the color is set by the material passed to the ParticleSystem and that sets the color for all the points.
EDIT: ParticleSystem and PointCloud have been renamed to Points. In addition, ParticleBasicMaterial and PointCloudMaterial have been renamed to PointsMaterial.
This answer only applies to versions of three.js prior to r.125.
To have a different color for each particle, you need to have a color array as a property of the geometry, and then set vertexColors to THREE.VertexColors in the material, like so:
// vertex colors
var colors = [];
for( var i = 0; i < geometry.vertices.length; i++ ) {
  // random color
  colors[i] = new THREE.Color();
  colors[i].setHSL( Math.random(), 1.0, 0.5 );
}
geometry.colors = colors;

// material
material = new THREE.PointsMaterial( {
  size: 10,
  transparent: true,
  opacity: 0.7,
  vertexColors: THREE.VertexColors
} );

// point cloud
pointCloud = new THREE.Points( geometry, material );
Your other questions are a little too general for me to answer, and besides, it depends on exactly what you are trying to do and what your requirements are. Yes, you can expect Canvas to be slower.
EDIT: Updated for three.js r.124
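For reference, on three.js r125 and newer (where THREE.Geometry was removed), roughly the same per-point coloring can be done with BufferGeometry attributes; a minimal sketch:
const count = 500000;
const positions = new Float32Array(count * 3);
const colors = new Float32Array(count * 3);
const color = new THREE.Color();

for (let i = 0; i < count; i++) {
  // random positions in -250 -> 250
  positions[i * 3 + 0] = Math.random() * 500 - 250;
  positions[i * 3 + 1] = Math.random() * 500 - 250;
  positions[i * 3 + 2] = Math.random() * 500 - 250;

  // random color per point
  color.setHSL(Math.random(), 1.0, 0.5);
  colors[i * 3 + 0] = color.r;
  colors[i * 3 + 1] = color.g;
  colors[i * 3 + 2] = color.b;
}

const geometry = new THREE.BufferGeometry();
geometry.setAttribute("position", new THREE.BufferAttribute(positions, 3));
geometry.setAttribute("color", new THREE.BufferAttribute(colors, 3));

// In r125+ vertexColors is a boolean, and the colors come from the "color" attribute.
const material = new THREE.PointsMaterial({ size: 1, vertexColors: true });
scene.add(new THREE.Points(geometry, material));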
