Triangles are drawn out of (depth) order - three.js

I am trying to create a simple example with three.js. I have a bunch of triangles making up a volume. It seems the triangles are incorrectly ordered, or rendered in a strange order. Here is the example:
http://urlmin.com/qlp
Just rotate the view around and you can see the flickering.
The behavior is the same with both the canvas and WebGL renderers.
Please note that I am not using lights. Each triangle is colored slightly differently.
I must be missing something really simple. Let me know what you think. Thanks!

JSFiddle here http://jsfiddle.net/aR8wr/
Your camera's NEAR plane is very small and its FAR plane is very big. This leaves less precision for the depth buffer, which causes floating-point errors and, in turn, the flickering. Setting those values to, for example, 0.01 and 1000 gets rid of the flickering.
It still looks weird from some angles. I'm not exactly sure what causes that, but it might help if you divide your model into smaller triangles. Alternatively, you can use WebGLRenderer, where it works perfectly as long as you set more sensible values for NEAR and FAR.
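As a minimal sketch of what that looks like in three.js (the field-of-view and aspect values here are illustrative, not taken from the fiddle):

var camera = new THREE.PerspectiveCamera(
    45,                                       // field of view (illustrative)
    window.innerWidth / window.innerHeight,   // aspect ratio
    0.01,                                     // near plane - not extremely small
    1000                                      // far plane - not extremely large
);

// Or, if the camera already exists, adjust it and refresh the projection:
camera.near = 0.01;
camera.far = 1000;
camera.updateProjectionMatrix();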

Related

THREE.js renderer.localClippingEnabled has bug when combined with renderer.vr

First off, I'm glad to see renderer.vr. I was previously using THREE.VREffect combined with THREE.VRControls, and .vr is great. But...
I have some clipping planes. Standard usage: an array of planes, given to the material. It mostly works fine*. I have a cube of them.
Problem: when I use it in VR, I get a different result for my left and right eyes! The clipping is off to the "left" of my head. I am guessing this is not on my end, and is happening because the positions of the clipping planes are affected by the camera position. Here's the repo in which it is happening: https://github.com/hamishtodd1/CVR - if you see what it's about, you'll see why clipping planes are important for me...
*Actually, another thing about clipping planes: the constant term feels like I have to set it to the negative of what it ought to be. I'm used to this now, but I just thought I'd mention it!
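For reference, a minimal sketch of the standard local-clipping setup being described (the plane normals and constants here are illustrative, not taken from the linked repo). Note that THREE.Plane is defined by normal · p + constant = 0, which is likely why the constant feels like it needs the opposite sign:

var renderer = new THREE.WebGLRenderer();
renderer.localClippingEnabled = true;

// Six planes forming a clipping "cube" around the origin (illustrative values).
var clipPlanes = [
    new THREE.Plane(new THREE.Vector3(-1, 0, 0), 1),
    new THREE.Plane(new THREE.Vector3( 1, 0, 0), 1),
    new THREE.Plane(new THREE.Vector3( 0, -1, 0), 1),
    new THREE.Plane(new THREE.Vector3( 0, 1, 0), 1),
    new THREE.Plane(new THREE.Vector3( 0, 0, -1), 1),
    new THREE.Plane(new THREE.Vector3( 0, 0, 1), 1)
];

// Standard usage: hand the array of planes to the material.
var material = new THREE.MeshStandardMaterial({
    clippingPlanes: clipPlanes,
    clipShadows: true
});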

Transparency with complex shapes in three.js

I'm trying to render a fairly complex lamp using Three.js: https://sayduck.com/3d/xhcn
The product is split up in multiple meshes similar to this one:
The main issue is that I also need to use transparent PNG textures (in order to achieve the complex shape while keeping polygon counts low) like this:
As you can see from the live demo, this gives really weird results, especially when rotating the camera around the lamp - I believe due to z-ordering of the meshes.
I've been reading answers to similar questions on SO, like https://stackoverflow.com/a/15995475/5974754 or https://stackoverflow.com/a/37651610/5974754 to get an understanding of the underlying mechanism of how transparency is handled in Three.js and WebGL.
I think that in theory, what I need to do is, each frame, explicitly define a renderOrder for each mesh with a transparent texture (because the order based on distance to the camera changes when moving around), so that Three.js knows which mesh is currently closest to the camera.
However, even ignoring for the moment that explicitly setting the order each frame seems far from trivial, I am not sure I understand how to set this order theoretically.
My meshes have fairly complex shapes and are quite intertwined, which means that from a given camera angle, some parts of mesh A can be closer to the camera than some parts of mesh B, while somewhere else, parts of mesh B are closer.
In this situation, it seems impossible to define a closer mesh, and thus a proper renderOrder.
Have I understood correctly, and this is basically reaching the limits of what WebGL can handle?
Otherwise, if this is doable, is the approach with two render passes (one for opaque meshes first, then one for transparent ones ordered back to front) the right one? And how should I go about defining the back-to-front renderOrder the way Three.js expects?
Thanks a lot for your help!
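For illustration, a rough sketch of the per-frame sorting described above (transparentMeshes is an assumed array of the meshes with transparent textures; this is not taken from the linked demo):

function updateRenderOrder(transparentMeshes, camera) {
    var worldPos = new THREE.Vector3();
    transparentMeshes
        .map(function (mesh) {
            mesh.getWorldPosition(worldPos);
            return { mesh: mesh, dist: worldPos.distanceTo(camera.position) };
        })
        .sort(function (a, b) { return b.dist - a.dist; })                    // back to front
        .forEach(function (entry, index) { entry.mesh.renderOrder = index; });
}

// Call once per frame, before renderer.render(scene, camera).

Because renderOrder sorts whole meshes rather than individual fragments, this only helps when each mesh is entirely in front of or behind the others; for truly intertwined transparent meshes the geometry would have to be split up further.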

three.js - Overlapping layers flickering

When several objects overlap on the same plane, they start to flicker. How do I tell the renderer to put one of the objects in front?
I tried to use .renderDepth, but it only works partly -
see example here: http://liveweave.com/ahTdFQ
Both boxes have the same size and it works as intended: I can change which of the boxes is visible by setting .renderDepth. But if one of the boxes is a bit smaller (say 40, 50, 50), the touching faces flicker and renderDepth no longer works.
How can I fix this issue?
When .renderDepth doesn't work, you have to set the depths yourself.
Moving whole meshes around is indeed not really efficient.
What you are looking for are offsets bound to materials:
material.polygonOffset = true;
material.polygonOffsetFactor = -0.1;
should solve your issue. See update here: http://liveweave.com/syC0L4
Use negative factors to display and positive factors to hide.
For starters, try reducing the far range on your camera - try 1000. Generally speaking, you shouldn't have overlapping faces in your 3D scene unless they are treated in a VERY specific way (look up the terms 'decal textures'/'decals'). So basically, you have to create depth offsets, and perhaps even pre-sort the objects when doing this, which all requires fairly low-level tinkering.
If the far-range reduction helps, then you're experiencing a lack of precision (depending on the device). Also look up 'z-fighting'.
UPDATE
Don't overlap planes.
How do I tell the renderer to put one of the objects in front?
You put one object in front of the other :)
For example, if you have a camera at (0,0,0) looking at an object at (0,0,10), and you want another object to be behind the first object, put it at (0,0,11) and it should work.
UPDATE2
What is z-buffering:
http://en.wikipedia.org/wiki/Z-buffering
http://msdn.microsoft.com/en-us/library/bb976071.aspx
Take note of "floating point in range of 0.0 - 1.0".
What is z-fighting:
http://en.wikipedia.org/wiki/Z-fighting
"...have similar values in the z-buffer. It is particularly prevalent with coplanar polygons, where two faces occupy essentially the same space, with neither in front. Affected pixels are rendered with fragments from one polygon or the other arbitrarily, in a manner determined by the precision of the z-buffer."
"The renderer cannot reposition anything."
I think that this is completely untrue. The renderer can reposition everything, and probably does unless it's something like Shadertoy or a video filter. Every time you move your camera, the renderer repositions everything (the camera is actually the only thing that DOES NOT MOVE).
It seems that you are missing some crucial concepts here; I'd start with this:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
About the depth offset mentioned:
How would this work? Say you want to draw a decal on a surface. You can 'draw' another mesh on that surface - by, say, projecting a quad onto it. You want to draw a bullet hole over a concrete wall, and you end up with two coplanar surfaces: the wall and the bullet hole. You can figure out the depth-buffer precision, find the smallest representable increment, and then move the bullet-hole mesh by that amount towards the camera. The object does not get scaled (you're doing this in NDC, which you can visualize as a cube in which you move planes back and forth by the smallest possible increment); it only translates in the depth direction, ending up in front of the other surface.
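A rough world-space approximation of that idea in three.js (the epsilon and names are illustrative; within three.js, the polygonOffset settings shown above achieve the same effect more directly):

var toCamera = new THREE.Vector3();

// Place the decal a tiny step from its base position towards the camera,
// so it wins the depth test against the coplanar wall.
function placeDecal(decalMesh, basePosition, camera) {
    toCamera.subVectors(camera.position, basePosition).normalize();
    decalMesh.position.copy(basePosition).addScaledVector(toCamera, 0.001); // tune to your depth precision
}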
I don't see any flicker. The cube movement in 3D seems to be super-smooth. Can you try on a different (maybe faster) computer? I used Chrome on a MacBook Pro.

Shadow Mapping - artifacts on thin wall orthogonal to light

I'm having an issue with faces that point away from the light and shadow mapping, and I can't seem to get past it. I'm still at the relatively early stages of optimizing my engine; however, even with everything hand-tuned for this one piece of geometry, it still looks like garbage.
The geometry is a skinny wall that is "curved" using about five separate chunks of wall. When I create my depth map I cull front faces (with respect to the light). This definitely helps, but the front faces on the other side of the wall seem to be what's causing the z-fighting/projective shadowing.
Some notes on the screenshot:
Front faces are culled when the depth texture (from the light) is being drawn
I have the near and far planes tuned just for this chunk of geometry (set at 20 and 25 respectively)
One directional light source, coming down on a slight angle toward the right side of the scene, enough to indicate that wall should be shadowed, but mostly straight down
Using a ludicrously large 4096x4096 shadow map texture
All lighting is disabled, but know that I am doing soft lighting (and hence vertex normals for the vertices) even on this wall
As mentioned here, the conclusion is that you should not shadow polygons that are back-facing with respect to the light. I'm struggling with this particular issue because I don't want to pass the face normals all the way through to the fragment shader to rule out the true back faces there - however, if anyone feels this is the best/only solution for this geometry, that's what I'll have to do. Since the pipeline doesn't make it easy or obvious to pass face normals through, this doesn't feel like the path of least resistance. Note that the normals I am passing are the vertex normals, to allow for softer lighting effects around the edges (which will likely span both non-shadowed and shadowed surfaces).
Note that I also have some nasty perspective aliasing. My next step was going to be cascaded shadow maps, but without fixing this first I feel like I'm just delaying the inevitable, as I've already hand-tightened the view as much as I can (or so I think).
Anyways I feel like I'm missing something, so if you have any thoughts or help at all would be most appreciated!
EDIT
To be clear, the wall technically should NOT be in shadow, based on where the light is coming from.
Below is an image with shadowing turned off. This is just using the vertex normals to calculate diffuse lighting - it's not pretty (too much geometry is visible), but it does show that some of the edges are somewhat visible.
So yes, the wall SHOULD be in shadow, but I'm hoping I can get the smoothing working better so the edges can have some diffuse lighting. If it needs to be completely in shadow, then whether it's the shadow map that puts it in shadow, or my code explicitly putting it in shadow because the face normal points away, I'm fine with either - but passing the face normal through to my vertex/fragment shader does not seem like the path of least resistance.
Perhaps these will help illustrate my problem better, or perhaps bring to light some fundamental understanding I am missing.
EDIT #2
I've included the depth texture below. You can see the wall in question in the bottom left, and from the screenshot you can see how I've trimmed the depth values to roughly 0.4-1. This means the depth values of that wall start in the 0.4 range. So it's not PERFECTLY clipped for it, but it's close. Does that seem reasonable? I'm pretty sure it's a full 24- or 32-bit depth buffer, à la the DEPTH_COMPONENT extension on iOS. For #starmole, does this help determine whether it's a scaling error in my projection? Do you think the area covered by my map is too large, so that focusing it more tightly might help?
The problem seems to be that you are:
(1) Culling the front faces
(2) Looking at the back face
(3) Not removing the light from the back face because it's actually not lit by the normal - or there is some inaccuracy in the computation
(4) Probably not adding some epsilon
(1) and (2) mean that there will be Z-fighting between the shadow map and the back faces.
Also, the shadow map resolution is not going to help you - just look at the wall in the shadow map, it's one pixel thick.
Recommendations:
Epsilons. Make sure that Z > lightZ + epsilon before treating a fragment as shadowed.
Epsilons. Only treat the wall as facing the light if dot(normal, lightDir) > epsilon, so that a wall that is very nearly orthogonal to the light still counts as shadowed.
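A sketch of those two tests, written as plain JavaScript for readability (in practice this comparison lives in the fragment shader; the epsilon values and names are illustrative):

function isLit(fragDepthInLightSpace, shadowMapDepth, normal, dirToLight) {
    var depthEpsilon  = 0.005;  // bias against shadow-map quantization / z-fighting
    var normalEpsilon = 0.01;   // treat nearly edge-on faces as shadowed

    // Faces pointing away from (or nearly orthogonal to) the light are shadowed outright.
    if (normal.dot(dirToLight) <= normalEpsilon) return false;

    // Otherwise compare against the shadow map with a bias.
    return fragDepthInLightSpace <= shadowMapDepth + depthEpsilon;
}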

Raytraced Shadows Problem

I've got a problem with shadow rays in my raytracer.
Please have a look at the following two pictures:
3ds Max:
http://neo.cycovery.com/shadow_problem.gif
My raytracer:
http://neo.cycovery.com/shadow_problem2.jpg
The scene is lit by a very bright light, shining from the back. It's so bright that there is no gradient in the shading, just either white or dark (due to the overexposure).
Both images were rendered using 3D Studio Max and both use the exact same geometry; the only difference is that in one case the normals are interpolated across the triangles.
Now consider the red dot on the surface. In the unsmoothed version it lies in a dark area. This means that the light source is not visible from this triangle, since the triangle is facing away from it.
In the smoothed version, however, it lies in the lit area, because the interpolated normal suggests that the light would be visible at that point (although the actual geometry of the triangle is facing away from the light source).
My problem arises when raytraced shadows come in. If a shadow ray is shot into the scene from the red dot to test whether the light source is visible (to determine shadowing), the shadow ray will return an intersection regardless of whether normals are interpolated (because intersections only depend on the geometry). Therefore the pixel would be shaded dark.
3ds Max handles this case correctly - its rendered image was generated with raytraced shadows turned on. However, my own raytracer runs exactly into this problem when I turn on raytraced shadows (in my raytracer the point is dark in both cases, because the raytraced shadows determine that the point lies in shadow), and I don't know how to solve it.
I hope someone knows this problem and how to deal with it. Thanks!
The 'correct' solutions are either to tessellate the triangles, or to solve the equation of the surface the triangle belongs to. I have only seen the tessellation approach in practice; tessellation gives you controllable precision and so on...
Otherwise, you should test the normal at the point (which I believe '3DStudio' does) and, where the normal is not facing the light, simply mark the point as not lit. It has nothing to do with 'self-shading'. This problem can only be solved easily with tessellation. Good luck!
I'm not sure if I understood your problem correctly. It's kind of hard to tell which version/result is obtained by which method and which result you consider correct.
Isn't it the case that you need to treat the intersection of the shadow ray with the triangle on which The Red Point ;-) lies as a special case? You don't do a geometry intersection, as with any other triangle, but only a direction check between the interpolated normal and the shadow ray.
Or, in a more general sense, you say that the shadow ray stops at a triangle - any triangle - when:
a) they intersect, and
b) the interpolated normal of the triangle at the intersection point has a direction opposite to the shadow ray.
How you can do it:
If the interpolated normal at the point is facing towards the light then the surface is potentially facing the light. If facing away you are in shadow.
In the first case, the two things that could cause a shadow are other objects, and the object itself if it is concave. The other-object case is easy.
Now, for self-shadowing: when you cast your ray towards the light source, if the point is truly an 'inside' point of a concave region, you will hit your own object twice as the ray enters and then leaves it, and thus the point is in shadow.
If you hit yourself only a single time, then you must be on the edge of where the light strikes, but since we already know (via the smoothed normals) that the point is facing the light, it is not shadowed.
Mat, I believe your problem could be that your shadow ray hits the same triangle that it originates from because it is so close to being tangential (note that it happens right on the border of the light-shadow transition). There are two ways of solving this problem: One approach is to use a "bias" that tests the distance from the ray origin, but I think a better solution is to store a reference to the originating triangle with the ray. If you do a test against this triangle, simply ignore it.
If you happen to be using a spatial index like a BVH, then you could try stepping out of the "box" containing the triangle before doing any intersection tests, but this is less straightforward than the aforementioned approach and must be handled with more care.
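A minimal sketch of the originating-triangle idea (the intersects helper and the triangle list are hypothetical stand-ins for whatever your raytracer already uses):

function isOccluded(shadowRay, triangles, originTriangle) {
    for (var i = 0; i < triangles.length; i++) {
        if (triangles[i] === originTriangle) continue;      // never test the triangle the ray started on
        if (intersects(shadowRay, triangles[i])) return true;
    }
    return false;
}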
It's normal to shift an intersection by a small amount in the direction of the light source when firing shadow rays.
I.e. shadowRay.origin = shadowRay.origin + shadowRay.dir * 0.0001
This avoids the shadow ray intersecting the same primitive as the primary / reflective / refractive ray did.
Perhaps your problem is because you did not do this?
I had the same issue, and in my case an if statement checking whether the dot product of the light direction and the normal is positive did the trick.
the pseudocode is:
bool shadow_hit(light, shader_ctx) {
    ....
    // If the interpolated normal agrees with the light direction,
    // report no shadow hit so the point is treated as lit.
    if (dot(light.dir, shader_ctx.normal) > 0) return false;
    ...
    return true;
}
