WebGL rendering transparent objects. Render order - three.js

Here is my test application in three.js - http://zheden.elitno.net/
There are 2 cubes - the green one is on top. If you uncheck "Cube 2" (the yellow inner cube), it becomes invisible. If you then rotate the camera and check "Cube 2" back on, it becomes the outer one. This does not reproduce at all angles of rotation.
Adding "renderer.sortObjects = false" fixed the problem. But could you please explain the reason for this behavior? The renderer sorts objects based on their positions. Why does the rendering order change when some object is transparent? Its position is not changed.
Is this related to Transparent textures behaviour in WebGL?
Thanks,
Zhenya

There are no transparent objects in your demo, only opaque ones. You are changing the visibility.
WebGLRenderer sorts objects based on their distance from the camera, and renders objects in the sorted order. It renders opaque objects from front to back.
The rendering order can change due to how the sorting algorithm breaks ties when two objects are the same distance from the camera.
However, the render order is not necessarily changing when you toggle the visibility off and then on again. What can be changing is the computed distance to the camera in the least significant digits, due to round-off when you move the camera. Hence, sometimes the second object renders, and sometimes it does not.
You have two cubes of exactly the same size and orientation in exactly the same location. Do not do that. It can cause you all sorts of rendering problems -- the most common of which is flickering (z-fighting).
three.js r.58
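As an illustration of that advice, here is a minimal sketch that shrinks the inner cube slightly so that no two faces are coincident. The sizes, colors, and the 0.99 factor are assumptions, not taken from the demo, and it is written with the current BoxGeometry name rather than the CubeGeometry of the r.58 era:
var outer = new THREE.Mesh(
    new THREE.BoxGeometry(50, 50, 50),
    new THREE.MeshBasicMaterial({ color: 0x00ff00 }) // green outer cube
);
var inner = new THREE.Mesh(
    new THREE.BoxGeometry(50, 50, 50),
    new THREE.MeshBasicMaterial({ color: 0xffff00 }) // yellow inner cube
);
inner.scale.set(0.99, 0.99, 0.99); // slightly smaller, so the faces never z-fight
scene.add(outer);
scene.add(inner);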

Related

Is it possible to select multiple objects with box selection?

I know that with QObjectPicker I can mouse pick a single entity. But how can I select multiple objects by drawing a rectangle on the screen?
I think this is actually pretty complicated. But here are my two cents:
If you only need to be able to select unoccluded objects
(i.e. you don't need to select occluded ones), you could add a second frame graph branch to your existing one and draw each object with a unique color, but to an offscreen texture. Then retrieve this texture, check which colors lie within the drawn rectangle, and retrieve and select the corresponding objects (compare to this question/answer).
I'm not sure how well this works in Qt3D because I've always had some issues with QRenderCapture. It didn't seem to matter where I added it in the frame graph, i.e. it always captured the last state, so even if you have multiple render targets it might capture the wrong one. Qt3D is still in a pretty rough state, I'd say.
If you need an example of how to render to an offscreen texture check out my example on GitHub.
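Language aside (this question is about Qt3D), the color-ID idea itself can be sketched. The following is an illustrative WebGL/three.js-style sketch of the render-offscreen-and-read-back step; width, height, rect, pickingScene, and pickId are all assumptions made up for the example:
var target = new THREE.WebGLRenderTarget(width, height);

// Give every pickable mesh a flat, unique color derived from an id.
pickingScene.traverse(function (obj) {
    if (obj.isMesh) {
        obj.material = new THREE.MeshBasicMaterial({ color: obj.userData.pickId });
    }
});

// Render the id pass offscreen, then read back the rectangle the user drew.
renderer.setRenderTarget(target);
renderer.render(pickingScene, camera);
renderer.setRenderTarget(null);

var pixels = new Uint8Array(4 * rect.width * rect.height);
renderer.readRenderTargetPixels(target, rect.x, rect.y, rect.width, rect.height, pixels);

// Every distinct color inside the rectangle identifies one selected object.
var selected = new Set();
for (var i = 0; i < pixels.length; i += 4) {
    var id = (pixels[i] << 16) | (pixels[i + 1] << 8) | pixels[i + 2];
    if (id !== 0) selected.add(id);
}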
If you need to be able to select occluded objects too
then it gets pretty complicated. I'm just providing some ideas here. I don't know if they will work.
If you don't have that many objects maybe you could implement the idea from above for each single object. I.e. for each object you have an offscreen frame graph branch that filters out all other objects. Then you could check each rendered texture for the rectangle drawn with the mouse. But again I'm not sure how well this works with Qt3D and if you have many objects (like in a game) it will probably crash because of the many offscreen textures.
You could also implement something like "inverse" frustum culling. In frustum culling, you omit rendering objects that lie outside the view frustum of the camera; here, you instead compute a frustum from the rectangle coordinates drawn with the mouse. Check out the QFrustumCulling code - you would need to compute the planes differently of course, using a modified view matrix. When the user draws the rectangle, compute the frustum and check all objects against it.
Unfortunately, this also selects objects whose bounding sphere merely intersects the frustum, even though you might not visibly touch any part of the object. If that bothers you, you could directly select all objects whose sphere lies completely within the frustum, and for objects that only partly intersect, do the intersection computation on a per-triangle basis, exiting the computation for the current object as soon as one triangle intersects the frustum. Depending on the number of triangles this could be computationally very costly.
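A rough three.js-flavored sketch of that sub-frustum test (Qt3D specifics differ; setViewOffset narrows the projection to the rubber-band rectangle, and all variable names are assumptions):
// Build a camera whose projection covers only the selection rectangle.
var subCamera = camera.clone();
subCamera.setViewOffset(screenWidth, screenHeight, rect.x, rect.y, rect.width, rect.height);
subCamera.updateMatrixWorld();

// Derive the selection frustum from the narrowed projection.
var frustum = new THREE.Frustum();
frustum.setFromProjectionMatrix(
    new THREE.Matrix4().multiplyMatrices(subCamera.projectionMatrix, subCamera.matrixWorldInverse)
);

// Coarse test: select objects whose world-space bounding sphere intersects it.
var selected = [];
pickables.forEach(function (obj) {
    if (obj.geometry.boundingSphere === null) obj.geometry.computeBoundingSphere();
    var sphere = obj.geometry.boundingSphere.clone().applyMatrix4(obj.matrixWorld);
    if (frustum.intersectsSphere(sphere)) selected.push(obj);
});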
I'd definitely stick to being able to select only unoccluded objects, especially because picking in OpenGL seems to be realized by drawing the objects with colors these days.

THREE.JS: Render large, distant objects at correct z-indicies and still zoom into small objects

I've got a scene where I'm drawing (to scale) the earth, moon, and some spacecraft. When the moon is occluded by the earth, instead of disappearing, it is still visible (through the earth).
From my research I found that part of the problem is that the near setting for my camera was much too small; as detailed in the linked article, small values of near cause the rounding in z-sorting to get muddled for very distant objects.
The complexity here is that I need fine-grained z-indices for when the camera is zoomed in to look at a spacecraft (an object with a radius of at most 61 meters, in comparison to the earth, weighing in at r ≈ 6.5e+06 meters). In order to make objects on the scale of the moon and earth render in the correct order, the near value has to be at least 100,000 m, at which point I cannot look at close objects.
One solution would be to reduce the scale to use kilometers, but I cannot afford to lose that precision, and prefer to use meters.
Any ideas as to how to make very large, distant objects render at the correct z-indices, while retaining scale and the ability to zoom into small objects?
My Ideas (which I don't know how to implement):
Change z-buffer to include more values, and higher resolution?
Add distant objects to a "farScene" which is rendered using a "farCamera" which is controlled by the same controls used on a close-up camera?
As per @WestLangley's answer, the solution is simply to add the option logarithmicDepthBuffer: true to the renderer:
this.renderer = new THREE.WebGLRenderer({antialias: true, logarithmicDepthBuffer: true});
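For reference, the second idea from the question (a separate "farScene" rendered with a "farCamera") can also be sketched roughly like this; the scene and camera names are assumptions:
// Render distant bodies first, then clear only the depth buffer and render
// nearby objects on top; each pass uses a near/far range suited to its scale.
renderer.autoClear = false;
renderer.clear();
renderer.render(farScene, farCamera);   // earth and moon, huge near/far
renderer.clearDepth();
renderer.render(nearScene, nearCamera); // spacecraft, tight near/far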
It may be that the problem is the z-test and not z-precision. That would mean either the z-test is not applied (perhaps because you render a transparent object with alpha blending), or it is applied with a non-default comparison (e.g. passing on far instead of near).
Try rendering the whole scene with a simple shader and no transparency, to make sure that transparency is not the source of the bug.
To get a correct z-order without the z-test, you have to sort the objects yourself each frame to determine the rendering order (from far to near).
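A minimal sketch of that manual ordering in three.js terms (renderOrder exists in newer releases; the meshes array is an assumption):
// Painter's algorithm: sort far-to-near and let renderOrder fix the draw
// sequence. This ignores intersecting geometry, which would need splitting.
meshes.sort(function (a, b) {
    return b.position.distanceToSquared(camera.position) -
           a.position.distanceToSquared(camera.position);
});
meshes.forEach(function (m, i) { m.renderOrder = i; });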

Alpha Blending and face sorting using OpenGL and GLSL

I'm writing a little 3D engine. I've just added alpha blending functionality to my program, and I wonder one thing: do I have to sort all the primitives relative to the camera?
Let's take a simple example: a scene composed of 1 skybox and 1 tree with alpha-blended leaves!
Here's a screenshot of such a scene:
Up to here, everything seems correct concerning the alpha blending of the leaves relative to each other.
But if we get closer...
... we can see there is a little trouble at the top right of the image (the area around the leaf forms a quad).
I think this bug comes from the fact that these two quads (primitives) should have been rendered after the ones behind them.
What do you think about my supposition?
PS: I want to point out that all the geometry concerning the leaves is rendered in just one draw call.
But if I'm right, it would mean that when I need to render an alpha-blended mesh like this tree, I have to update my VBO every time my camera moves, sorting all the primitives (triangles or quads) from the camera's point of view, so that the primitives in back are rendered first...
What do you think of my idea?
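For what it's worth, here is a hypothetical WebGL sketch of exactly that idea; the vertex layout, the quad structure, and the helper names are assumptions made up for illustration:
var FLOATS_PER_VERTEX = 8; // position (3) + normal (3) + uv (2), assumed layout
var VERTS_PER_QUAD = 6;    // two triangles per leaf quad, non-indexed

function distSq(a, b) {
    var dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return dx * dx + dy * dy + dz * dz;
}

function drawLeaves(gl, vbo, quads, cameraPos) {
    // Sort quads far-to-near by the squared distance of their centers.
    quads.sort(function (a, b) {
        return distSq(b.center, cameraPos) - distSq(a.center, cameraPos);
    });
    // Rebuild the interleaved vertex data in sorted order and update the VBO.
    var verts = new Float32Array(quads.length * VERTS_PER_QUAD * FLOATS_PER_VERTEX);
    quads.forEach(function (q, i) {
        verts.set(q.vertices, i * VERTS_PER_QUAD * FLOATS_PER_VERTEX);
    });
    gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
    gl.bufferSubData(gl.ARRAY_BUFFER, 0, verts);
    gl.drawArrays(gl.TRIANGLES, 0, quads.length * VERTS_PER_QUAD);
}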

three.js - Overlapping layers flickering

When several objects overlap on the same plane, they start to flicker. How do I tell the renderer to put one of the objects in front?
I tried to use .renderDepth, but it only works partly -
see example here: http://liveweave.com/ahTdFQ
Both boxes have the same size and it works as intended: I can change which of the boxes is visible by setting .renderDepth. But if one of the boxes is a bit smaller (say 40,50,50), the contacting layers flicker and the render depth doesn't work anymore.
How can I fix that issue?
When .renderDepth doesn't work, you have to set the depths yourself.
Moving whole meshes around is indeed not really efficient.
What you are looking for are offsets bound to materials:
material.polygonOffset = true;
material.polygonOffsetFactor = -0.1;
should solve your issue. See update here: http://liveweave.com/syC0L4
Use negative factors to pull a surface forward (display it) and positive factors to push it back (hide it).
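For illustration, a minimal sketch of two coplanar planes where the offset decides which one wins the depth test (sizes and colors are assumptions):
var base = new THREE.Mesh(
    new THREE.PlaneGeometry(50, 50),
    new THREE.MeshBasicMaterial({ color: 0x0000ff })
);
var decalMaterial = new THREE.MeshBasicMaterial({ color: 0xff0000 });
decalMaterial.polygonOffset = true;
decalMaterial.polygonOffsetFactor = -0.1; // negative: pulled toward the camera
decalMaterial.polygonOffsetUnits = -1;
var decal = new THREE.Mesh(new THREE.PlaneGeometry(40, 40), decalMaterial);
scene.add(base);
scene.add(decal);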
For starters, try reducing the far range on your camera - try 1000. Generally speaking, you shouldn't have overlapping faces in your 3D scene unless they are treated in a VERY specific way (look up the term 'decal textures'/'decals'). So basically, you have to create depth offsets, and perhaps even pre-sort the objects when doing this, all of which requires pretty low-level tinkering.
If the far-range reduction helps, then you're experiencing a lack of depth precision (it depends on the device). Also look up 'z-fighting'.
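For reference, with a standard perspective projection the stored depth for an eye-space distance z is d = f(z - n) / (z(f - n)), normalized to [0, 1]; most of that range is spent very close to the near plane, which is why raising near or lowering far recovers precision in the distance.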
UPDATE
Don't overlap planes.
How do I tell the renderer to put one of the objects in front?
You put one object in front of the other :)
For example, if you have a camera at 0,0,0 looking at an object at 0,0,10, and you want another object to be behind the first one, put it at 0,0,11 and it should work.
UPDATE2
What is z-buffering:
http://en.wikipedia.org/wiki/Z-buffering
http://msdn.microsoft.com/en-us/library/bb976071.aspx
Take note of "floating point in range of 0.0 - 1.0".
What is z-fighting:
http://en.wikipedia.org/wiki/Z-fighting
...have similar values in the z-buffer. It is particularly prevalent with coplanar polygons, where two faces occupy essentially the same space, with neither in front. Affected pixels are rendered with fragments from one polygon or the other arbitrarily, in a manner determined by the precision of the z-buffer.
"The renderer cannot reposition anything."
I think that this is completely untrue. The renderer can reposition everything, and it probably does unless it's Shadertoy or some video filter. Every time you move your camera, the renderer repositions everything (the camera is actually the only thing that DOES NOT move).
It seems that you are missing some crucial concepts here; I'd start with this:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
About the depth offset mentioned:
Here is how this would work: say you want to draw a decal on a surface. You can 'draw' another mesh onto this surface - by, say, projecting a quad onto it. You want to draw a bullet hole over a concrete wall, so you end up with two coplanar surfaces - the wall and the bullet hole. You can figure out the depth-buffer precision, find the smallest representable value, and then move the bullet-hole mesh by that value towards the camera. The object does not get scaled (you're doing this in NDC, which you can visualize as a cube, moving planes back and forth in the smallest possible increment), but it does translate in the depth direction, ending up in front of the other surface.
I don't see any flicker - the cube movement in 3D seems super-smooth. Can you try on a different computer (maybe a faster one)? I used Chrome on a MacBook Pro.

Unexpected three.js rendering order in r61 vs. r58

The following illustrates the rendering order I would like to obtain for two plane geometries:
http://jsfiddle.net/Axy2F/8/
This works fine under r58 but under r61 the red square is obscured regardless of how I structure the scene graph. I'm unclear whether this is a bug in r61, or whether I was doing things incorrectly in r58, in a way that just happened to work.
Am I right in assuming that behind.add(child) should suffice to have the red square "beneath" the indigo one in the scene graph, and therefore rendered on top of it?
If not, what is the correct way to establish the rendering order by controlling the construction of the scene graph (in a way that works with r61)? I would like to avoid setting renderDepth explicitly. Note that setting renderer.sortObjects to false does not help.
The object that is in front is the object that is closest to the camera. Being a child has nothing to do with it.
Both your objects have position ( 0, 0, 0 ), so they are the same distance from the camera.
This will lead to z-fighting, which is worse with CanvasRenderer than it is with WebGLRenderer.
Change the position of the child to render it in front. For example,
child.position.z = 1;
FYI, r.61 has a different tie-breaker rule than r.58 did. This is why the rendering is different in r.61.
