I'm adding 3 buffered plane geometries (PlaneBufferGeometry) to the scene. They are 400x300 units, each subdivided the same amount.
Now my issue is that when I add them to the scene, my FPS drops from 60 to 1. I didn't expect such an FPS drop. Are there any optimizations I can do to improve the performance? (I need the subdivisions for applying a heightmap later on.)
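A drop that severe usually points to vertex count. A back-of-envelope sketch (assuming one segment per unit, i.e. 400x300 segments — the question doesn't state the actual subdivision count, so these numbers are illustrative):

```javascript
// Vertex count for a PlaneBufferGeometry with the given segment counts:
// (widthSegments + 1) * (heightSegments + 1) vertices per plane.
function planeVertexCount(widthSegments, heightSegments) {
  return (widthSegments + 1) * (heightSegments + 1);
}

// If each plane were subdivided once per unit (400 x 300 segments):
const perPlane = planeVertexCount(400, 300); // 120,701 vertices
const total = 3 * perPlane;                  // ~362K vertices for 3 planes

// Halving the segment density cuts the vertex count roughly 4x,
// which may still be plenty of resolution for a heightmap:
const reduced = planeVertexCount(200, 150);  // 30,351 vertices
```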
I'm trying to scale sprites to have a size defined in px, regardless of camera FOV and so on. I have sizeAttenuation set to false, as I don't want them scaled based on distance from the camera, but I'm struggling with setting the scale. I don't know the conversion formula, and when I hardcoded a scale that looks right on one device, it's wrong on another. Any advice on how to get the sprites sized correctly across multiple devices? Thanks
Corrected answer:
Sprite size is measured in world units. Converting world units to pixel units may take a lot of calculations because it varies based on your camera's FOV, distance from camera, window height, pixel density, and so on...
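To make that concrete, here is a sketch of the conversion for a perspective camera. The function name and parameters are illustrative, not three.js API:

```javascript
// World-units-per-pixel conversion for a perspective camera. The visible
// height at distance d is 2 * d * tan(fov / 2), so one pixel corresponds
// to that height divided by the viewport height in pixels.
function worldSizeForPixels(pixelSize, distance, fovDegrees, viewportHeightPx) {
  const fovRad = (fovDegrees * Math.PI) / 180;
  const visibleHeight = 2 * distance * Math.tan(fovRad / 2);
  return pixelSize * (visibleHeight / viewportHeightPx);
}

// e.g. a 50px sprite, 10 units from a 60-degree camera, 1000px-tall viewport:
const s = worldSizeForPixels(50, 10, 60, 1000);
// sprite.scale.set(s, s, 1); // applied to a THREE.Sprite, in world units
```

Note that `distance` changes every time the camera or sprite moves, which is why this has to be recomputed per frame and why THREE.Points is usually the simpler route.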
To use pixel-based units, I recommend switching from THREE.Sprite to THREE.Points. Its material, THREE.PointsMaterial, has a size property that's measured in pixels when sizeAttenuation is set to false. Just keep in mind that it has a maximum size limitation based on the device's hardware, defined by gl.ALIASED_POINT_SIZE_RANGE.
My original answer continues below:
However, "1 px" is a subjective measurement nowadays, because if you use renderer.setPixelRatio(window.devicePixelRatio); you'll get different sprite sizes on different devices. For instance, MacBooks have a pixel ratio of 2 or above, some cell phones have a pixel ratio of 3, and desktop monitors are usually at a ratio of 1. This can be avoided by not using setPixelRatio, or, if you do use it, by multiplying by the ratio:
const s = 5;
points.material.size = s * window.devicePixelRatio;
Another thing to keep in mind is that THREE.Points are sized in pixels, whereas meshes are sized in world units. So when you shrink your browser window vertically, the point size will remain the same, but the meshes will scale down to fit the viewport. This means that a 5px point will take up more real estate in a small window than on a large monitor. If this is a problem, make sure you factor window.innerHeight into the point size calculation.
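A minimal sketch of that compensation. The reference height is an arbitrary baseline chosen for this example, not a three.js constant:

```javascript
// Scale a pixel-sized point so it occupies the same fraction of the
// viewport regardless of window height. referenceHeight is the window
// height the size was originally authored against (an assumption here).
function pointSizeForViewport(basePx, referenceHeight, innerHeight) {
  return basePx * (innerHeight / referenceHeight);
}

// A 5px point authored against a 1080px-tall window:
const size = pointSizeForViewport(5, 1080, 540); // 2.5 in a half-height window
// material.size = size * window.devicePixelRatio; // if setPixelRatio is used
```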
I have been working on a game where everything is rendered in 3D, though the bullets are 2D sprites. This poses a problem: I have to rotate the bullet sprite by rotating the material, which turns every bullet possessing that material rather than the individual sprite I want to turn. It is also rather inefficient to create a new sprite clone for every bullet. Is there a better way to do this? Thanks in advance.
Rotate the sprite itself instead of the texture.
edit:
As the OP mentioned, the SpriteMaterial controls the sprite's rotation, so setting the sprite's rotation.y manually does nothing...
So instead of using the Sprite type, you could use a regular PlaneGeometry mesh with a MeshBasicMaterial (or similar), and update the matrices yourself to both keep the quad facing the camera and rotate it toward its trajectory.
Then at least you can share the material amongst all instances.
Then the performance bottleneck becomes the number of draw calls (one per sprite).
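A sketch of the per-frame update this approach needs. The angle math is plain JS; the mesh/camera calls sketched in the comments are the usual three.js API, with `mesh`, `camera`, and `bullet` assumed to exist:

```javascript
// Angle of a bullet's trajectory in the screen plane, used to roll the
// camera-facing quad toward its direction of travel.
function trajectoryAngle(vx, vy) {
  return Math.atan2(vy, vx);
}

// Per-frame update for one billboarded bullet:
//   mesh.quaternion.copy(camera.quaternion);              // face the camera
//   mesh.rotateZ(trajectoryAngle(bullet.vx, bullet.vy));  // roll toward travel
// Because the rotation lives on the mesh rather than the material,
// a single material can be shared by every bullet.

const a = trajectoryAngle(0, 1); // straight up => PI / 2
```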
You can improve on that by using a single BufferGeometry and computing the 4 screen-space vertices for each sprite, each frame. This moves the bottleneck away from draw calls; you become limited instead by the speed at which you can transform vertices in JavaScript, which is slow but not the end of the world. This is also how many THREE.js particle systems are implemented.
The next step beyond that is to use a custom vertex shader to do the heavy vertex computation. You still update the BufferGeometry each frame, but instead of transforming vertices, you just write the sprite's position into each of its 4 vertices and let the vertex shader figure out which of the 4 corners it is transforming (based on the UV coordinate, or a value stored in one of the vertex color channels, .r for instance) and which sprite to render from your sprite atlas (a single texture/canvas with all your sprites laid out on a grid), encoded in the .g of the vertex color.
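The atlas lookup step can be sketched as follows. The grid dimensions and function names are assumptions of this example:

```javascript
// Map a sprite index (e.g. decoded from the vertex color's .g channel)
// to its tile's UV offset and size in a grid atlas.
function atlasOffset(index, cols, rows) {
  const col = index % cols;
  const row = Math.floor(index / cols);
  return {
    u: col / cols,
    v: 1 - (row + 1) / rows, // UV origin is bottom-left
    tileW: 1 / cols,
    tileH: 1 / rows,
  };
}

const o = atlasOffset(5, 4, 4); // tile at column 1, row 1 of a 4x4 atlas
```

The same arithmetic runs inside the vertex or fragment shader; keeping a JS copy is handy for generating the per-vertex attribute data.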
The next step beyond that is to not update the BufferGeometry every frame, but to store both the position and the velocity of the sprite in the vertex data, and pass only a time-offset uniform into the vertex shader. The vertex shader can then handle integrating the sprite position over a longer time period. This only works for sprites with deterministic behavior, or behavior that can be derived from a texture data source like a noise or warping texture: things like smoke, explosions, etc.
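The integration itself is simple; here is the JS equivalent of the per-vertex math the shader would run (names illustrative):

```javascript
// Integrate a particle's position from its stored start position and
// velocity, given only a time uniform. In GLSL this would be roughly:
//   vec3 pos = startPosition + velocity * uTime;
function particlePosition(p0, v, t) {
  return {
    x: p0.x + v.x * t,
    y: p0.y + v.y * t,
    z: p0.z + v.z * t,
  };
}

const p = particlePosition({ x: 0, y: 0, z: 0 }, { x: 1, y: 2, z: 0 }, 3);
// => { x: 3, y: 6, z: 0 }
```

Because the GPU re-derives every position from the same start data and time uniform, nothing needs to be re-uploaded per frame, which is what removes the CPU bottleneck.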
You can extend these techniques to draw gigantic scrolling tilemaps. I've used them to make multilayer scrolling/zoomable hexmaps that were 2048 hexes square (a pretty huge map, ~4M triangles), with multiple layers of sprites on top of that, at 60 Hz.
Here is the original Stemkoski particle system for reference:
http://stemkoski.github.io/Three.js/Particle-Engine.html
and:
https://stemkoski.github.io/Three.js/ParticleSystem-Dynamic.html
I have 10K cubes, so the draw call count is very high: 10K.
You can see this here:
http://thegrook.com/three.js/merge1.html
If I combine all the cubes into a single mesh, the draw calls drop to one.
You can see this here:
http://thegrook.com/three.js/merge2.html
But in the first example, I change the scale of each cube based on its distance from the camera, so no matter the zoom level, the cubes stay the same size on screen, which gives me the effect I need:
The closer the camera is, the lower the density between the cubes.
In the second example this doesn't work, because the scaling affects the whole mesh rather than each cube.
Any idea how to achieve the density effect with less draw calls?
Thanks.
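The per-cube scaling described in the question can be sketched as pure math. Under a perspective camera, apparent size falls off linearly with distance, so the compensating scale must grow linearly (the reference values below are assumptions of this example):

```javascript
// Keep an object's on-screen size constant by scaling it in proportion
// to its distance from the camera. baseScale is the scale that looks
// right at referenceDistance.
function constantScreenScale(distance, referenceDistance, baseScale) {
  return baseScale * (distance / referenceDistance);
}

// A cube scaled 1.0 at distance 10 needs scale 2.0 at distance 20:
const s2 = constantScreenScale(20, 10, 1.0);
// With instancing (THREE.InstancedMesh), this per-cube scale would be
// baked into each instance's matrix instead of set on a Mesh.
```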
* Edited 1 *
I managed to solve this with instancing, and can render 250K objects while staying at 60fps.
When I zoom out, the objects overlap; that's OK for my case.
But what's wrong is that all the textures flicker a lot.
It seems there is no fixed drawing order for the instances.
Is there a way to fix this?
Here is an example:
http://thegrook.com/three.js/instancing3.html
* just zoom out with the mouse wheel
* Edited 2 *
Looks like if I disable depthWrite on the material, the problem is solved.
Is this the right solution?
I have a scene with a single camera and one PlaneBufferGeometry
If I make this plane size 1x1 I get 60fps
If I make this plane size 1000x1000 I get <20fps
Why does this happen? I am drawing the same number of vertices to the screen.
Here is a fiddle showing the problem
Just change the definition of size between 1 and 1000 to observe the problem.
var size = 1000;
//size = 1;
var geometry = new THREE.PlaneBufferGeometry(size, size);
I am adding 50 identical planes in this example. There isn't a significant fps hit with only one plane.
It's definitely normal. A larger plane covers more surface on the screen, and thus more pixels.
More fragments are emitted by the rasterization process. For each one, the GPU checks whether it passes the depth test and/or the stencil test; if so, it invokes the fragment shader for that pixel.
Try zooming in on your 1x1 plane until it covers the whole screen. Your FPS will drop as well.
#pleup has a good point there; to extend on it a little bit: even a low-end GPU will have absolutely no problem overdrawing (painting the same pixel multiple times) several times (I'd say something like 4 to 8 times) at fullscreen and still keep up 60 FPS. This number is likely a bit lower for WebGL due to the compositing with the DOM and browser UI, but it's still multiple times for sure.
Now what is happening is this: you are in fact creating 50 planes, not just one, all of them the same size and in the same place. No idea why, but that's irrelevant here. As all of them are in the same place, every single pixel needs to be drawn 50 times, and in the worst case that is 50 times the full screen area.
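A back-of-envelope estimate of what that overdraw costs per frame (the screen resolution is illustrative):

```javascript
// Fragment count for N coplanar fullscreen planes: every covered pixel
// is shaded once per plane, since all 50 pass or contend for the depth test.
function fragmentsPerFrame(widthPx, heightPx, overlappingPlanes) {
  return widthPx * heightPx * overlappingPlanes;
}

const n = fragmentsPerFrame(1920, 1080, 50); // 103,680,000 fragments per frame
```

At 60 FPS that is over 6 billion fragment shader invocations per second, which comfortably exceeds the fill rate of most low-end GPUs.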
I've built a scene with a cube and a camera placed at the center of the cube. I am able to rotate the cube. Ideally, I would like to limit the rotation to horizontal.
Is there any way to count the amount of rotation? For example, being able to know at any moment that I've rotated 30 radians in one direction, with the amount updated accordingly if I rotate backward.
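One way to do this is to accumulate the signed per-frame rotation deltas yourself, rather than reading the object's wrapped Euler angle. A minimal sketch (the tracker object is an assumption of this example, not a three.js API):

```javascript
// Track cumulative horizontal rotation by summing signed per-frame deltas.
// Rotating back subtracts, so the running total reflects net rotation and
// can grow past a full turn, unlike a wrapped angle.
function createRotationTracker() {
  let total = 0;
  return {
    add(deltaRadians) { total += deltaRadians; },
    value() { return total; },
  };
}

const tracker = createRotationTracker();
tracker.add(0.5);  // rotate one way
tracker.add(-0.2); // rotate back
// tracker.value() is now approximately 0.3 net radians
```

Call `tracker.add(delta)` with the same delta you apply to `cube.rotation.y` each frame; clamping `tracker.value()` before applying the delta also gives you the horizontal-only limit.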