There is a three.js scene with some 3D objects and 200-300 small text labels (fewer than 10% are visible to the camera from any one perspective). Adding the text sprites decreased the FPS from 60 to 30-40, and it's also very memory-consuming.
Is there a way to make the sprites faster?
I read about caching materials, but the labels are all unique, so this isn't possible.
Test: https://jsfiddle.net/h9sub275/4/
(You can change SPRITE_COUNT to see an FPS drop on your machine)
Edit 1: Setting the canvas size to the text bounds will decrease memory consumption, but not improve the FPS.
var Test = {
SPRITE_COUNT : 700,
init : function() {
this.renderer = new THREE.WebGLRenderer({antialias : true}); // false, a bit faster without antialias
this.renderer.setPixelRatio(window.devicePixelRatio);
this.renderer.setSize(window.innerWidth, window.innerHeight);
this.container = document.getElementById('display');
this.container.appendChild(this.renderer.domElement);
this.camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 1000);
this.scene = new THREE.Scene();
this.group = new THREE.Object3D();
this.scene.add(this.group);
for (var i = 0; i < this.SPRITE_COUNT; i++) {
var sprite = this.makeTextSprite('label ' + i, 24);
sprite.position.set(Math.random() * 20 - 10, Math.random() * 20 - 10, Math.random() * 20 - 10);
this.group.add(sprite);
}
this.stats = new Stats();
this.stats.domElement.style.position = 'absolute';
this.stats.domElement.style.left = '0px';
this.stats.domElement.style.top = '0px';
document.body.appendChild(this.stats.domElement);
this.render();
},
render : function() {
var self = this;
this.camera.rotation.x += 0.002;
this.renderer.render(this.scene, this.camera);
this.stats.update();
requestAnimationFrame(function() {self.render();});
},
makeTextSprite : function(message, fontsize) {
var ctx, texture, sprite, spriteMaterial,
canvas = document.createElement('canvas');
ctx = canvas.getContext('2d');
ctx.font = fontsize + "px Arial";
// set the canvas size before drawing; resizing the canvas afterwards would clear it
canvas.width = Math.ceil(ctx.measureText(message).width);
canvas.height = fontsize * 2; // fontsize * 1.5
// resizing the canvas resets the 2D context state, so the font has to be set again
ctx.font = fontsize + "px Arial";
ctx.fillStyle = "rgba(255,0,0,1)";
ctx.fillText(message, 0, fontsize);
texture = new THREE.Texture(canvas);
texture.minFilter = THREE.LinearFilter; // NearestFilter;
texture.needsUpdate = true;
spriteMaterial = new THREE.SpriteMaterial({map : texture});
sprite = new THREE.Sprite(spriteMaterial);
return sprite;
}
};
window.onload = function() {Test.init();};
You are correct: three.js is indeed causing the GPU to use a lot of texture memory this way. Keep in mind that every texture uploaded to the GPU takes up its own allocation, and a canvas that is mostly empty space still costs its full width × height, so you waste a lot of memory by making a single canvas for each sprite.
A challenge here is the three.js design choice of keeping a single set of UV coordinates (offset/repeat) on the Texture rather than on the Material. Even when you combine the label sprites into a single texture map and .clone() the texture for each material, without some extra effort three.js will still upload each Texture to the GPU separately instead of sharing the memory. In short, there is currently no documented way to tell it that these textures are the same, and you can't point every Material at the same Texture because it isn't the Material that holds the UVs. https://github.com/mrdoob/three.js/issues/5821 discusses that problem.
I've worked around these issues by combining my sprites in one or more texture maps. For this I created a "sprite texture atlas manager" which manages the allocation of sprite textures as I need them, and I ported a knapsack algorithm to JS which helps me to (mostly) fill up these texture maps with my labels so not a lot of memory is wasted.
I've extracted my code for this into a separate library, available here: https://github.com/Leeft/three-sprite-texture-atlas-manager, with a live example (which does not yet use sprites, but that should be easy to add) at http://jsfiddle.net/Shiari/sbda72k9/.
Fortunately, while this is not documented yet, I also found that it's quite easy in recent versions (r73 at least, perhaps r72 as well) to force the textures to share GPU memory by making sure they all have the same .uuid value. My library makes use of this, and in my testing so far (with 2048x2048 sprite maps; I only need two of those at the size I'm rendering) this brings GPU memory down from ~2.6 GB when not shared to ~300-600 MB when shared. (2048px is far too large when you're only placing a single label in it, though, and reducing the texture size helps greatly when the maps are not shared.)
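To make the idea concrete, here is a minimal sketch of the atlas approach, not the library's actual API: all labels are drawn into one canvas in a single row (no knapsack packing), every sprite gets a cloned texture that only differs in offset/repeat, and the clones reuse the original's .uuid as described above. It relies on the sprite renderer honoring the texture's offset/repeat, which it did around r73.
// Minimal atlas sketch (illustrative only): one shared canvas, per-sprite UV windows.
function makeLabelSprites(labels, fontsize) {
  var canvas = document.createElement('canvas');
  var ctx = canvas.getContext('2d');
  ctx.font = fontsize + 'px Arial';
  // naive layout: one row of equally wide cells
  var cellWidth = Math.ceil(Math.max.apply(null, labels.map(function (l) {
    return ctx.measureText(l).width;
  })));
  canvas.width = cellWidth * labels.length;
  canvas.height = fontsize * 2;
  ctx.font = fontsize + 'px Arial'; // resizing the canvas resets the context state
  ctx.fillStyle = 'rgba(255,0,0,1)';
  labels.forEach(function (label, i) {
    ctx.fillText(label, i * cellWidth, fontsize);
  });
  var baseTexture = new THREE.Texture(canvas);
  baseTexture.minFilter = THREE.LinearFilter;
  baseTexture.needsUpdate = true;
  return labels.map(function (label, i) {
    var tex = baseTexture.clone();   // same image, its own offset/repeat
    tex.uuid = baseTexture.uuid;     // undocumented: lets the renderer reuse the GPU upload
    tex.needsUpdate = true;
    tex.repeat.x = 1 / labels.length;
    tex.offset.x = i / labels.length;
    return new THREE.Sprite(new THREE.SpriteMaterial({ map: tex }));
  });
}
In the test above, the per-sprite makeTextSprite() calls would then be replaced by one makeLabelSprites() call per batch of labels.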
Lastly, as per your own answer, draw calls and culling are also a performance issue in r73. I never hit that problem, though, since I was already batching my draw calls by grouping everything.
This was a bug in the three.js WebGLRenderer (a missing view-frustum check for sprites in r73 and earlier). It has already been fixed in the dev branch, so you can expect it to be available in r74.
Issue and details: https://github.com/mrdoob/three.js/issues/7371
Fixed version with the latest dev build: https://jsfiddle.net/h9sub275/9/
Performance test with r73: https://jsfiddle.net/h9sub275/7/
(Click to see the performance difference between manually removing invisible sprites and not removing them)
Latest Dev Build:
<script src="https://rawgit.com/mrdoob/three.js/dev/build/three.js"> </script>
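For reference, the "manually removing invisible sprites" workaround compared in the r73 fiddle boils down to something like the following sketch, written against the r73-era API; the camera and the sprite group names are assumptions.
var frustum = new THREE.Frustum();
var projScreenMatrix = new THREE.Matrix4();
var worldPos = new THREE.Vector3();

function cullSprites(camera, group) {
  camera.updateMatrixWorld();
  projScreenMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
  frustum.setFromMatrix(projScreenMatrix); // renamed setFromProjectionMatrix in later releases
  for (var i = 0; i < group.children.length; i++) {
    var sprite = group.children[i];
    // hide sprites outside the view frustum so the renderer skips them
    sprite.visible = frustum.containsPoint(sprite.getWorldPosition(worldPos));
  }
}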
Related
I set up a simple glb viewer with three.js. The model casts and accepts shadows. The problem is that dark boxes appear once I set a spotLight. I'm not sure what the problem is.
I uploaded the project here: https://github.com/maxibenner/threejsviewer
Configuring proper shadows can sometimes be difficult.
var spotLight = new THREE.SpotLight(0xffa95c, 2);
spotLight.castShadow = true;
spotLight.position.set(2, 2, -2);
spotLight.angle = Math.PI * 0.1;
spotLight.shadow.camera.near = 1;
spotLight.shadow.camera.far = 4;
spotLight.shadow.bias = -0.002;
spotLight.shadow.mapSize.set(1024, 1024);
The idea is to move the spot light closer to the model and to tighten the shadow camera's frustum as much as possible. The bias setting is necessary to avoid self-shadowing artifacts. A bigger shadow map size (the default is 512x512) sharpens the shadows.
As mentioned in the comment, adding an instance of CameraHelper to your scene is very helpful when optimizing shadows:
scene.add( new THREE.CameraHelper( spotLight.shadow.camera ) );
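For completeness, shadows also have to be enabled on the renderer and on the meshes themselves; most of this is presumably already in your viewer, but here is a minimal sketch, where renderer, model (the loaded glb scene) and ground are assumed names:
renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFSoftShadowMap; // softer shadow edges

// let every mesh in the loaded model cast and receive shadows
model.traverse(function (child) {
  if (child.isMesh) {
    child.castShadow = true;
    child.receiveShadow = true;
  }
});

ground.receiveShadow = true;
scene.add(spotLight);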
I load an .obj model in Three.js and then create independent meshes from its faces for a really interesting animation. The problem is very bad performance with so many meshes.
In fact, a single mesh with 10,000 faces works beautifully. But 10,000 separate meshes (created from those faces) perform badly, even without animation, in a completely static scene.
How can I optimize performance while keeping this animation?
Link: http://intelligence-group.ru/test.html
Here is the code creating meshes:
obj_loader.load(
    '/assets/models/zeus.obj',
    function (object) {
        var material = new THREE.MeshPhongMaterial({
            color: "#eeeeee",
            shading: THREE.FlatShading,
            metalness: 0,
            roughness: 0.5,
            refractionRatio: 0.25
        });
        var face = new THREE.Face3(0, 1, 2);
        for (var i = 0; i < object.children.length; i++) {
            var child = object.children[i];
            var geometry = new THREE.Geometry().fromBufferGeometry(child.geometry);
            // the inner loop needs its own index variable (j), otherwise it clobbers i
            for (var j = 0; j < geometry.faces.length; j++) {
                var new_geometry = new THREE.Geometry();
                var a = geometry.faces[j].a;
                var b = geometry.faces[j].b;
                var c = geometry.faces[j].c;
                new_geometry.vertices.push(geometry.vertices[a]);
                new_geometry.vertices.push(geometry.vertices[b]);
                new_geometry.vertices.push(geometry.vertices[c]);
                new_geometry.faces.push(face);
                new_geometry.computeFaceNormals();
                var mesh = new THREE.Mesh(new_geometry, material);
                group.add(mesh);
            }
            full_orig_array(group); // animation function - not the reason for the bad performance!
        }
        scene.add(group);
    }
);
Important: after the animation completes I substitute the 10,000 meshes with a single mesh (the original object from the loader), and then you can see a big improvement in performance. It's not about the animation; I checked: even without animation, 10,000 meshes perform just as badly.
As I understand it, the problem is that each mesh has its own geometry, but I don't know how to solve this.
Please take into account that I don't duplicate geometry; each mesh's geometry is unique. That is the problem!
There are already a number of answers here on Stack Overflow about the performance cost of draw calls and state changes, so I won't go into that. You NEED to get the number of draw calls down to render efficiently. How to do that is completely up to your exact problem and your creativity.
My suggestion would be to use a single BufferGeometry: you can animate all vertex positions within one buffer geometry. You would need to keep the per-triangle state (translation, rotation, etc.) outside of the geometry, but you can still write code that transforms all of your triangles as if they were individual objects; a sketch follows.
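A minimal sketch of that idea, assuming a three.js version where BufferGeometry.toNonIndexed() exists and using illustrative names: one non-indexed geometry holds every triangle, and the animation writes per-triangle offsets directly into the position attribute each frame, so the whole model remains a single draw call.
// one geometry for the whole model: 3 consecutive vertices per triangle
var geometry = child.geometry.toNonIndexed();
var basePositions = geometry.attributes.position.array.slice(); // untouched copy
var mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);

function animateTriangles(time) {
  var pos = geometry.attributes.position.array;
  var triangleCount = pos.length / 9; // 3 vertices * 3 components each
  for (var t = 0; t < triangleCount; t++) {
    var offset = Math.sin(time * 0.001 + t) * 0.5; // per-triangle displacement along x
    for (var v = 0; v < 3; v++) {
      var i = t * 9 + v * 3;
      pos[i] = basePositions[i] + offset;
      // y and z stay at their base values in this sketch
    }
  }
  geometry.attributes.position.needsUpdate = true;
}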
You get overhead from the many draw calls and WebGL state changes. Rendering as one mesh is a single draw call versus 10,000.
You can use three's InstancedBufferGeometry to merge these into one call, without duplicating the geometry (thus saving both memory and overhead).
This class unfortunately does not work with the default materials, shadows, etc.; it's a fairly low-level construct.
I wrote a further abstraction of this that should work at the same level as THREE.Mesh and work with shadows, AO, depth, etc.:
https://www.npmjs.com/package/three-instanced-mesh
I've run into an issue after switching to a logarithmic depth buffer in Three.js. Everything runs nicely except for nearby culling of the ground as described in the following photos:
As you can see, the camera is elevated above the ground significantly. The character box that is shown is about 2 units above the ground, and my camera is set up as such:
var WIDTH = window.innerWidth
, HEIGHT = window.innerHeight;
var VIEW_ANGLE = 70
, ASPECT = WIDTH / HEIGHT
, NEAR = 1e-6
, FAR = 9000;
var aspect = WIDTH / HEIGHT;
var camera = new THREE.PerspectiveCamera(VIEW_ANGLE, ASPECT, NEAR, FAR);
camera.rotation.order = 'YXZ';
So my NEAR parameter is nowhere near 2, the distance between the camera and the ground. You can see in the second image that I even move up the camera with my PointerLockControls and still run into the issue.
Can anyone diagnose my issue?
I also tested my issue by seeing if this bug occurred with a static camera as well. It does.
Additionally, this problem only happens with the logarithmic depth buffer, as it doesn't happen with the default depth buffer.
I have my camera as a child to a controls object, which is defined as follows:
controls = new THREE.PointerLockControls(camera);
controls.getObject().position.set(strtx, 50, strtz);
scene.add(controls.getObject());
camera.position.z += 2;
camera.position.y += .1;
Here's the relevant code for PointerLockControls:
var pitchObject, yawObject;
var v = new THREE.Vector3(0, 0, -1);
THREE.PointerLockControls = function(camera){
var scope = this;
camera.rotation.set(0, 0, 0);
pitchObject = new THREE.Object3D();
pitchObject.rotation.x -= 0.3;
pitchObject.add(camera);
yawObject = new THREE.Object3D();
yawObject.position.y = 10;
yawObject.add(pitchObject);
var PI_2 = Math.PI / 2;
var onMouseMove = function(event){
if (scope.enabled === false) return;
var movementX = event.movementX || event.mozMovementX || event.webkitMovementX || 0;
var movementY = event.movementY || event.mozMovementY || event.webkitMovementY || 0;
yawObject.rotation.y -= movementX * 0.002;
pitchObject.rotation.x -= movementY * 0.002;
pitchObject.rotation.x = Math.max( - PI_2, Math.min( PI_2, pitchObject.rotation.x ) );
};
this.dispose = function() {
document.removeEventListener( 'mousemove', onMouseMove, false );
};
document.addEventListener( 'mousemove', onMouseMove, false );
this.enabled = false;
this.getObject = function () {
return yawObject;
};
this.getDirection = function() {
// assumes the camera itself is not rotated
var rotation = new THREE.Euler(0, 0, 0, "YXZ");
var direction = new THREE.Vector3(0, 0, -1);
return function() {
rotation.set(pitchObject.rotation.x, yawObject.rotation.y, 0);
v.copy(direction).applyEuler(rotation);
return v;
};
}();
};
You'll also notice that it's only the ground that is being culled, not other objects.
Edit:
I've whipped up an isolated environment that shows the larger issue. In the first image, I have a flat PlaneBufferGeometry that has 400 segments for both width and height, defined by var g = new THREE.PlaneBufferGeometry(380, 380, 400, 400);. Even getting very close to the surface, no clipping is present:
However, if I provide only 1 segment, var g = new THREE.PlaneBufferGeometry(380, 380, 1, 1);, the clipping is present.
I'm not sure if this is intended in Three.js/WebGL, but it seems that I'll need to do something to work around it.
I don't think this is a bug; I think it's a consequence of how the depth buffer works in the different settings. Look at this example: on the right, the depth buffer can't make up its mind between the letters in "microscopic" and the sphere. This is because it has lower precision at very small scales and starts rounding in a way that oscillates between one object and the other, favoring draw order over z-depth.
It's always a tradeoff. If you want to avoid this issue, you can try raising the scale of your scene overall, so that the camera's 'near' is never so close to anything that the depth gets rounded off; in other words, work in a number range that won't be rounded away by the exponential model of the logarithmic z-buffer.
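As a rough illustration of those two knobs (the numbers are arbitrary, not a recommendation):
// option 1: a less extreme near plane than 1e-6
var camera = new THREE.PerspectiveCamera(70, WIDTH / HEIGHT, 0.1, 9000);

// option 2: author everything at, say, 100x the scale (and scale all positions,
// speeds and distances to match), so geometry never sits microscopically close to `near`
var ground = new THREE.Mesh(
  new THREE.PlaneBufferGeometry(380 * 100, 380 * 100, 1, 1),
  groundMaterial // assumed to exist
);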
Another question: how is the blue defined? Maybe what you're seeing is not clipping from being too close, but confusion over whether the blue or the ground is closer. If it's just a blue box encompassing everything, you could try making it bigger and more distant from the ground.
EDIT:
Okay, this looks like it should work, so I would start looking for edge cases. What can you do to change the scene so that it does work? What can you do to make other things start breaking?
try moving the landscape far down/ far up (does the issue persist when looking up instead of down at it, does it persist even when it's unquestionably far away?)
try rotating the landscape
try changing the camera FOV
try changing the camera far plane
try changing the camera near plane from 1e-x notation to .000001, .0001, .01, .1, etc., and see what effect it has
console.log the camera object in your render function, and make sure that the fov, near, far, etc. are as you set them at setup and are not being overwritten and reset to defaults. Check what it prints out in Chrome's developer tools; you can browse the whole object and check the position, the parent name, all that stuff.
Basically, I don't see a blatant mistake, so I would guess it's either something hard to spot or it's working exactly as it should. Figure out what you can do to improve the effect or make it worse, and that will clarify a direction to go in.
A good rule of thumb for debugging is to take things to an extreme, without trying to fix anything or keep the code true to its purpose, and just see in what way it breaks further or changes. Report back when you find something.
I have seams between the horizontal faces of the cube when using a texture atlas in three.js.
This is demo: http://jsfiddle.net/rnix/gtxcj3qh/7/ or http://jsfiddle.net/gtxcj3qh/8/ (from comments)
Screenshot of the problem:
Here I use repeat and offset:
var materials = [];
var t = [];
var imgData = document.getElementById("texture_atlas").src;
for ( var i = 0; i < 6; i ++ ) {
t[i] = THREE.ImageUtils.loadTexture( imgData ); //2048x256
t[i].repeat.x = 1 / 8;
t[i].offset.x = i / 8;
//t[i].magFilter = THREE.NearestFilter;
t[i].minFilter = THREE.NearestFilter;
t[i].generateMipmaps = false;
materials.push( new THREE.MeshBasicMaterial( { map: t[i], overdraw: 0.5 } ) );
}
var skyBox = new THREE.Mesh( new THREE.CubeGeometry( 1024, 1024, 1024), new THREE.MeshFaceMaterial(materials) );
skyBox.applyMatrix( new THREE.Matrix4().makeScale( 1, 1, -1 ) );
scene.add( skyBox );
The atlas has size 2048x256 (powers of two). I also tried manual UV mapping instead of repeat, but the result is the same. I use 8 tiles instead of 6 because I thought the precision of the division 1/6 might cause the problem, but that's not it.
The pixels on this line come from the next tile in the atlas. I tried a completely white atlas and there were no artefacts, which explains why there are no seams on the vertical borders of the Z-faces. I have played with filters, wrapT, wrapS and mipmaps, but it does not help. Increasing the resolution does not help either; here is an 8192x1024 atlas: http://s.getid.org/jsfiddle/atlas.png. I tried another atlas, and the result is the same.
I know that I can split the atlas into separate files, and that works perfectly, but it is not convenient.
What's wrong?
I think the issue is the usual filtering problem with texture sheets. At image borders within a texture sheet, the GPU may pick a texel from either the correct image or the neighboring image due to limited precision. Because the colors are usually very different, this results in visible seams. For regular textures, this is solved with CLAMP_TO_EDGE.
If you must use a texture atlas, then you need to fake CLAMP_TO_EDGE behavior by padding the image borders. See this answer: https://gamedev.stackexchange.com/questions/61796/sprite-sheet-textures-picking-up-edges-of-adjacent-texture. It should look something like this (borders exaggerated for clarity):
Otherwise, the simpler solution is to use a separate texture for each face. WebGL supports cube textures, and that is what is usually used to implement skyboxes; a sketch follows.
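A sketch of that per-face route with a cube texture (the file names are placeholders, and this uses the newer CubeTextureLoader / scene.background API rather than the MeshFaceMaterial approach from the question):
var loader = new THREE.CubeTextureLoader();
scene.background = loader.load([
  'px.png', 'nx.png', // +x, -x
  'py.png', 'ny.png', // +y, -y
  'pz.png', 'nz.png'  // +z, -z
]);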
Hacking the UVs, replacing every value of 1.0 with 0.999 and every 0.0 with 0.001, will partially work around this problem.
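A slightly more principled version of the same hack is to inset each tile's UV window by half a texel, computed from the atlas width, so the sampler never reads across the tile boundary. A sketch using the repeat/offset variables from the question (the tile count and atlas width are taken from there):
var tileCount = 8;
var atlasWidth = 2048;
var halfTexel = 0.5 / atlasWidth; // half a texel in UV space

for (var i = 0; i < 6; i++) {
  t[i].repeat.x = 1 / tileCount - 2 * halfTexel; // shrink the window slightly
  t[i].offset.x = i / tileCount + halfTexel;     // and shift it inward
}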
I am trying to use the Three.js library to display a large number of colored points on the screen (about half a million to a million, for example). I am trying to use the Canvas renderer rather than the WebGL renderer if possible (the web pages would also be displayed in Google Earth client bubbles, which seem to work with the Canvas renderer but not the WebGL renderer).
While I have the problem solved for a small number of points (tens of thousands) by modifying the code from here, I am having trouble scaling it beyond that.
But with the following code, using WebGL and the Particle System, I can render half a million random points, though without colors.
...
var particles = new THREE.Geometry();
var pMaterial = new THREE.ParticleBasicMaterial({
color: 0xFFFFFF,
size: 1,
sizeAttenuation : false
});
// now create the individual particles
for (var p = 0; p < particleCount; p++) {
// create a particle with random position values,
// -250 -> 250
var pX = Math.random() * POSITION_RANGE - (POSITION_RANGE / 2),
pY = Math.random() * POSITION_RANGE - (POSITION_RANGE / 2),
pZ = Math.random() * POSITION_RANGE - (POSITION_RANGE / 2),
particle = new THREE.Vertex(
new THREE.Vector3(pX, pY, pZ)
);
// add it to the geometry
particles.vertices.push(particle);
}
var particleSystem = new THREE.ParticleSystem(
particles, pMaterial);
scene.add(particleSystem);
...
Is the better performance of the above code due to the Particle System? From what I have read in the documentation, it seems the Particle System can only be used by the WebGL renderer.
So my question(s) are
a) Can I render such a large number of particles using the Canvas renderer, or is it always going to be slower than the WebGL/ParticleSystem version? If it is possible, how do I go about doing it? What objects and/or tricks can I use to improve performance?
b) Is there a compromise I can reach if I give up some features? In other words, can I still use the Canvas renderer for the large dataset if I give up the need to color the individual points?
c) If I have to give up the Canvas and use the WebGL version, is it possible to change the colors of the individual points? It seems the color is set by the material passed to the ParticleSystem and that sets the color for all the points.
EDIT: ParticleSystem and PointCloud have been renamed to Points. In addition, ParticleBasicMaterial and PointCloudMaterial have been renamed to PointsMaterial.
This answer only applies to versions of three.js prior to r.125.
To have a different color for each particle, you need to have a color array as a property of the geometry, and then set vertexColors to THREE.VertexColors in the material, like so:
// vertex colors
var colors = [];
for( var i = 0; i < geometry.vertices.length; i++ ) {
// random color
colors[i] = new THREE.Color();
colors[i].setHSL( Math.random(), 1.0, 0.5 );
}
geometry.colors = colors;
// material
material = new THREE.PointsMaterial( {
size: 10,
transparent: true,
opacity: 0.7,
vertexColors: THREE.VertexColors
} );
// point cloud
pointCloud = new THREE.Points( geometry, material );
Your other questions are a little too general for me to answer, and besides, it depends on exactly what you are trying to do and what your requirements are. Yes, you can expect Canvas to be slower.
EDIT: Updated for three.js r.124