Optimize draw calls for merged mesh with individual scaling - three.js

I have 10K cubes, and the number of draw calls is very high: well... 10K.
You can see this here:
http://thegrook.com/three.js/merge1.html
If I combine all the cubes into a single mesh, the draw calls drop to one.
You can see this here:
http://thegrook.com/three.js/merge2.html
But in the first example, I change the scale of each cube based on its distance from the camera,
so no matter what the zoom is, each cube stays the same size on screen. This gives me the effect that I need:
the closer the camera gets, the lower the density between the cubes.
In the second example this doesn't work, because the scaling affects the whole mesh and not each cube individually.
Any idea how to achieve the density effect with fewer draw calls?
Thanks.
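For reference, the per-cube scaling in the first example boils down to something like this (a minimal sketch run once per frame; the cubes array and the k constant are illustrative names, not the demo's actual code):
// scale each cube in proportion to its distance from the camera,
// so its projected size on screen stays constant
var k = 0.01; // hypothetical tuning constant for the desired screen size
for (var i = 0; i < cubes.length; i++) {
    var d = cubes[i].position.distanceTo(camera.position);
    cubes[i].scale.set(d * k, d * k, d * k);
}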
* Edited 1 *
I managed to solve this with instancing; I can render 250K objects and stay at 60fps.
When I zoom out, the objects overlap - that's OK for my case.
But what's wrong is that all the textures flicker a lot...
It seems like there is no fixed drawing order for the instances.
Is there a way to fix this?
Here is an example:
http://thegrook.com/three.js/instancing3.html
* just zoom out with the mouse wheel
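For anyone curious, the setup is roughly along these lines - a rough sketch only, not the demo's actual code: it assumes an older three.js (addAttribute is setAttribute in newer versions), the offset attribute and k uniform are illustrative names, texturing is omitted, and the instanced mesh is assumed to sit at the origin:
var box = new THREE.BoxBufferGeometry(1, 1, 1);
var geometry = new THREE.InstancedBufferGeometry();
geometry.index = box.index;
geometry.attributes.position = box.attributes.position;
// one offset (world position) per instance, filled in elsewhere
var offsets = new Float32Array(instanceCount * 3);
geometry.addAttribute('offset', new THREE.InstancedBufferAttribute(offsets, 3));
var material = new THREE.ShaderMaterial({
    uniforms: { k: { value: 0.01 } }, // hypothetical screen-size constant
    vertexShader: [
        'attribute vec3 offset;',
        'uniform float k;',
        'void main() {',
        // scale each instance by its distance from the camera, so its
        // projected size stays constant (cameraPosition is a three.js built-in)
        '    vec3 scaled = position * distance(offset, cameraPosition) * k;',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4(scaled + offset, 1.0);',
        '}'
    ].join('\n'),
    fragmentShader: 'void main() { gl_FragColor = vec4(1.0); }'
});
scene.add(new THREE.Mesh(geometry, material));
All 250K cubes then go out in a single draw call, with the distance-based scale computed per instance on the GPU.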
* Edited 2 *
Looks like if I disable depthWrite on the material, the problem is solved.
Is this the right solution?
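For reference, that's a one-line change on the instanced material. With depth writes off, overlapping instances stop fighting over the depth buffer and simply draw in submission order (they are still depth-tested against the rest of the scene):
material.depthWrite = false; // instances no longer write depth, so overlapping instances can't flicker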

Related

Find which Object3Ds the camera can see in Three.js - raycast from the camera to each object

I have a grid of points (Object3Ds using THREE.Points) in my Three.js scene, with a model sitting on top of the grid. In code the model is called defaultMesh and uses a merged geometry for performance reasons.
I'm trying to work out which of the points in the grid my perspective camera can see at any given time, i.e. every time the camera position is updated by my orbit controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays intersect the model, remove the points corresponding to those rays from a list of all the points, and be left with a list of points the camera can see.
So far so good: the ray creation and intersection code is placed in the render loop (as it has to be updated whenever the camera is moved), and therefore it's horrendously slow (obviously).
// start with every grid point, then remove the occluded ones
var gridPointsVisible = gridPoints.geometry.vertices.slice(0);
var startPoint = camera.position.clone();
// create a ray from the camera position to each point in the grid
for (var i = 0; i < gridPoints.geometry.vertices.length; i++) {
    var target = gridPoints.geometry.vertices[i];
    var direction = new THREE.Vector3().subVectors(target, startPoint).normalize();
    var ray = new THREE.Raycaster(startPoint, direction);
    if (ray.intersectObject(defaultMesh).length > 0) {
        // the ray hits the model, so this point is occluded
        gridPointsVisible.splice(gridPointsVisible.indexOf(target), 1);
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points--and you should know what color each sphere should be by the same color calculation as the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
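A minimal sketch of that read-back, assuming a pickingScene in which every sphere is drawn flat-colored with its index encoded in the red channel (older three.js versions pass the target to render() directly instead of using setRenderTarget):
var pickingTarget = new THREE.WebGLRenderTarget(256, 256); // low resolution is fine
renderer.setRenderTarget(pickingTarget);
renderer.render(pickingScene, camera);
renderer.setRenderTarget(null);
// read the RGBA buffer back and collect every red value that survived
var buffer = new Uint8Array(256 * 256 * 4);
renderer.readRenderTargetPixels(pickingTarget, 0, 0, 256, 256, buffer);
var visibleIds = {};
for (var i = 0; i < buffer.length; i += 4) {
    visibleIds[buffer[i]] = true; // red only: works for fewer than 256 spheres, keeping one value for the background
}
Any point whose index appears in visibleIds is visible to the camera; the rest are hidden.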

How to optimize rendering and tweening of many circles at once in Unity?

I have a scene in which I render and tween multiple circles (from 7 up to sometimes 50 circle sprites) in Unity, but it's hammering the frame rate on mobile. How can I optimize this? Can I cull circles that are fully hidden behind other circles? Is there an efficient circle shader I can use? Can I achieve the same effect differently?
Here's a screenshot of what I'm talking about:

Three.js zoom to fit width of objects (ignoring height)

I have a set of block objects, and I'd like to set the perspective camera so that their entire width is fully visible (the height will be too big - that's OK, we're going to pan up and down).
I've seen there are a number of questions close to this, such as:
Adjusting camera for visible Three.js shape
Three.js - Width of view
THREE.JS: Get object size with respect to camera and object position on screen
How to Fit Camera to Object
ThreeJS. How to implement ZoomALL and make sure a given box fills the canvas area?
However, none of them seem to quite cover everything I'm looking for:
I'm not interested in the height, only the width (they won't be the same - the size will be dynamic but I can presume the height will be larger than the width)
The camera.position.z (or the FOV, I guess) is the unknown, so I'm trying to get the equations the right way around to solve for that
(I'm not great with 3D maths. Thanks in advance!)
I was able to simplify this problem a lot, in my case...
Since I knew the overall size of the objects, I was able to simply come up with a suitable distance by changing the camera's z position a few times and seeing what looked best.
My real problem was that the same z position gave different widths, relative to the screen width, on different sized screens - due to the different aspect ratios.
So all I did was divide my distance value by camera.aspect. Now the blocks take up the same proportion of the screen's width on all screen sizes :-)
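In code the fix is tiny. A minimal sketch, where baseDistance is the hypothetical value I settled on by hand at an aspect ratio of 1:
// visible width at distance d is 2 * d * tan(fov/2) * aspect, so keeping
// d * aspect constant keeps the blocks' share of the screen width constant
var baseDistance = 50; // hypothetical hand-tuned value
camera.position.z = baseDistance / camera.aspect;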

Zoom invariant objects in webgl

WebGL doesn't support line thickness. So when I need to highlight some line, I just draw a rectangle around it. But when I zoom the scene it looks pretty scary.
There are two ways I see now:
1) Recalculate the rectangle width according to canvas.width, converting it into model coordinates.
2) Place all zoom-invariant objects under a separate matrix (I use scenejs) and recalculate their positions after each mouse wheel event.
I don't like either of these solutions. So I wonder: is there a good workaround to make items zoom-invariant?
Another way around that (not the most efficient one though) might be to use shaders. In our WebGL app, we render highlighted primitives into a texture and then blur it back on screen to add a "selection glow effect".

Drawing 3D in front/behind sprites in XNA/WP7?

This is kind of frustrating me as I've been grizzling over it for a couple of hours now.
Basically I'm drawing 2D sprites through SpriteBatch, and 3D orthographically projected geometry using the BasicEffect class.
My problem is controlling what gets rendered on top of what. At first I thought it would simply be a matter of controlling the render order, i.e. if I do:
Draw3DStuff()
SpriteBatch.Begin(...)
Draw2DStuff();
SpriteBatch.End();
I thought the 2D stuff would render over the 3D stuff; however, since I don't control when the device begins/ends its renders, this isn't the result. The 3D always renders on top of the 2D elements, no matter the projection settings, world translation, the z components of the 3D geometry's vertex definitions, or the layer depth of the 2D elements.
Is there something I'm not looking into here?
What's the correct way to handle the depth here?
OK I figured it out 2 seconds after posting this question. I don't know if it was coincidence or if StackOverflow has a new feature granting the ability to see future answers.
The Z positions of SpriteBatch elements are between 0 and 1, so they're not directly comparable to the z positions of the orthographic geometry being rendered.
When you create an orthographic matrix, however, you define near and far clip planes, and the Z pos you set should fall within them. I had a hunch that the SpriteBatch class is effectively drawing quads orthographically, so by extension 0 would represent the near clip and 1 the far clip, and the sprite depth was probably being written to the same place as the 3D geometry's depth.
Soooo, to make it work I figured the 0-to-1 layer depths of the sprites would be measured against the near/far clips I was defining for the orthographic render, so it was simply a matter of setting the right z value. For example:
If I have a near clip of 0 and a far clip of 10000, and I want to draw something at the equivalent of a 0.5f layer depth (in front of sprites drawn at 0.6 and behind sprites drawn at 0.4), I do:
float zpos = 0.5f;
// map the 0-1 layer depth into the orthographic near/far range
// (LinearInterpolate is a stand-in; XNA's MathHelper.Lerp does the same)
float orthoGraphicZPos = LinearInterpolate(0, 10000, zpos);
Or just zpos * 10000 :D
I guess it would make more sense to have your orthographic renderer's near/far clips be 0 and 1, to compare directly with the sprites' layer depths.
Hopefully my reasoning for this solution was correct (more or less).
As an aside, since you mentioned you had a hunch about how the sprite batch draws quads: you can see the source code for all the default/included shaders and the SpriteBatch class if you are curious, or need help solving a problem like this:
http://create.msdn.com/en-US/education/catalog/sample/stock_effects
The problem is that SpriteBatch messes with some of the render states that are used when you draw your 3D objects. To fix this you just have to reset them before rendering your 3D objects, like so:
// restore the defaults SpriteBatch changed:
GraphicsDevice.BlendState = BlendState.Opaque;                // opaque blending for solid geometry
GraphicsDevice.DepthStencilState = DepthStencilState.Default; // depth testing and writing back on
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;    // wrap textures instead of clamping
Draw3DStuff()
SpriteBatch.Begin(...)
Draw2DStuff();
SpriteBatch.End();
Note that this is for XNA 4.0, which I'm pretty sure you're using anyway. More info can be found on Shawn Hargreaves' blog. This will reset the render states, draw the 3D objects, then draw the 2D objects over them. Without resetting the render states you get weird effects like the ones you're seeing.
