Barycentric wireframes with full disclosure of back faces - opengl-es

I've implemented a barycentric-coordinate wireframe shader something like this, and in general it is working nicely.
But as in Florian Boesch's WebGL demo, some of the wire faces on the far side of the mesh are obscured (perhaps something to do with the order in which the GPU draws the faces).
I've set the following in the hope that it would clear things up:
glDisable(GL_CULL_FACE);
(glPolygonMode doesn't exist in OpenGL ES 2.0, so filled triangles are the only polygon mode anyway.)
...but no go so far. Is this possible in OpenGL ES 2.0?

I had forgotten to discard fragments with transparent output, so the depth buffer was being written in spite of the apparently transparent geometry, and the mesh was self-obscuring because those depth tests failed.
This would be the problem in Florian's demo too, though it may be that he explicitly avoids discard for mobile performance reasons.
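A minimal CPU-side sketch of the fix, with the fragment shader's decision modeled in JavaScript (the function names and the line-width threshold are hypothetical; in the real GLSL shader the `return null` branch would be a `discard`, which is what prevents the depth write):

```javascript
// GLSL-style smoothstep, reimplemented for the sketch.
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}

// Models the wireframe fragment shader: wire alpha comes from the
// smallest barycentric coordinate. Returns an RGBA color, or null to
// stand in for GLSL's `discard`.
function shadeWireFragment(bary, lineWidth) {
  const minBary = Math.min(bary[0], bary[1], bary[2]);
  const alpha = 1.0 - smoothstep(0.0, lineWidth, minBary);
  if (alpha <= 0.0) return null; // `discard`: no color write, no depth write
  return [0.0, 0.0, 0.0, alpha];
}
```

Because a discarded fragment writes neither color nor depth, the transparent interior of each triangle no longer occludes the wires behind it.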

Related

Three.js: See through objects artefact on mobile

I recently tried my app on mobile and noticed some weird behavior: it seems like the camera near plane is clipping the geometry, yet other objects at the same distance aren't clipped... Materials are StandardMaterials, and depthTest and depthWrite are set to true.
I must add that I can't reproduce this issue on my desktop, which makes it difficult to understand what's going on, since it works perfectly at first sight.
Here are 2 gifs showing the problem:
You can see the same wall on the left in the next gif
Thanks!
EDIT:
It seems the transparent faces (on mobile) were due to logarithmicDepthBuffer = true (though I don't know why?), and I also had additional artefacts caused by the camera near and far planes being too far from each other, producing depth issues (see Flickering planes)...
EDIT 2:
Well I wasn't searching for the right terms... Just found this today: https://github.com/mrdoob/three.js/issues/13047#issuecomment-356072043
So logarithmicDepthBuffer uses EXT_frag_depth, which is only supported by 2% of mobiles according to WebGLStats. A workaround would be to tessellate the geometries or stay with a linear depth buffer...
You don't need a logarithmic depth buffer to fix this. You've succumbed to the classic temptation to bring your near clip REALLY close to the eye and the far clip very far away. This creates a very non-linear depth precision distribution and is easily mitigated by pushing the near clip plane out by a reasonable amount. Try to sandwich your 3D data as tightly as possible between your near and far clip planes and tolerate some near plane clipping.
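To see why, here is a small sketch (illustrative numbers, not taken from the question) of how window-space depth is distributed by a standard perspective projection:

```javascript
// Window-space depth in [0, 1] of a point at eye-space distance d,
// for a standard perspective projection with the given near/far planes.
// Equivalently: the fraction of the depth range used up by [near, d].
function windowDepth(d, near, far) {
  return (far / (far - near)) * (1 - near / d);
}

// With near = 0.01, a point only 10 units away already lands at the
// very end of the depth range:
const tight = windowDepth(10, 0.01, 1000); // ≈ 0.999
// Pushing the near plane out to 1 leaves far more precision for the scene:
const sane = windowDepth(10, 1, 1000);     // ≈ 0.901
```

With near = 0.01, everything beyond 10 units is squeezed into the last ~0.1% of the depth range, which is exactly where a fixed-point depth buffer starts producing flicker; pushing near out to 1 leaves roughly 10% of the range for the same region.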

Transparency with complex shapes in three.js

I'm trying to render a fairly complex lamp using Three.js: https://sayduck.com/3d/xhcn
The product is split up in multiple meshes similar to this one:
The main issue is that I also need to use transparent PNG textures (in order to achieve the complex shape while keeping polygon counts low) like this:
As you can see from the live demo, this gives really weird results, especially when rotating the camera around the lamp - I believe due to z-ordering of the meshes.
I've been reading answers to similar questions on SO, like https://stackoverflow.com/a/15995475/5974754 or https://stackoverflow.com/a/37651610/5974754 to get an understanding of the underlying mechanism of how transparency is handled in Three.js and WebGL.
I think that in theory, what I need to do is, each frame, explicitly define a renderOrder for each mesh with a transparent texture (because the order based on distance to camera changes when moving around), so that Three.js knows which pixel is currently closest to the camera.
However, even ignoring for the moment that explicitly setting the order each frame seems far from trivial, I am not sure I understand how to set this order theoretically.
My meshes have fairly complex shapes and are quite intertwined, which means that from a given camera angle, some part of mesh A can be closer to the camera than some part of mesh B, while somewhere else, part of mesh B are closer.
In this situation, it seems impossible to define a closer mesh, and thus a proper renderOrder.
Have I understood correctly, and this is basically reaching the limits of what WebGL can handle?
Otherwise, if this is doable, is the approach with two render scenes (one for opaque meshes first, then one for transparent ones ordered back to front) the right one? How should I go about defining the back to front renderOrder the way that Three.js expects?
Thanks a lot for your help!
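For what it's worth, the per-frame, per-object ordering described in the question can be sketched like this (plain objects stand in for THREE.Mesh instances; the helper names are hypothetical):

```javascript
// Squared distance between two {x, y, z} points (no sqrt needed for sorting).
function distSq(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return dx * dx + dy * dy + dz * dz;
}

// Before each frame: sort transparent meshes back-to-front from the
// camera and write that order into three.js's `renderOrder`.
function assignRenderOrder(transparentMeshes, cameraPosition) {
  transparentMeshes
    .slice() // sort a copy; the original array order is preserved
    .sort((a, b) => distSq(b.position, cameraPosition) - distSq(a.position, cameraPosition))
    .forEach((mesh, i) => { mesh.renderOrder = i; }); // farthest drawn first
}
```

Note that this still orders whole meshes, so it cannot resolve the intertwined case the question describes; for that you would need a per-fragment technique such as depth peeling, or to split the meshes into smaller pieces that can be ordered consistently.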

silhouette rendering with webgl / opengl

I've been trying to render silhouettes on CAD models with WebGL. The closest I got to the desired result was with fwidth and the dot product of the normal and the eye vector. I found it difficult to control the width, though.
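For reference, that facing-ratio approach can be sketched on the CPU like this (hypothetical names; in the actual GLSL, `width` would typically be scaled by fwidth() so the silhouette keeps a constant pixel width, which is exactly the knob that is hard to control):

```javascript
// Dot product of two 3-component vectors.
function dot3(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// GLSL-style smoothstep, reimplemented for the sketch.
function smoothstep(e0, e1, x) {
  const t = Math.min(Math.max((x - e0) / (e1 - e0), 0), 1);
  return t * t * (3 - 2 * t);
}

// 1.0 on the silhouette (normal perpendicular to the view vector),
// fading to 0 as the surface turns toward or away from the viewer.
// `width` controls how wide the silhouette band is.
function silhouetteFactor(normal, view, width) {
  const facing = Math.abs(dot3(normal, view));
  return 1 - smoothstep(0, width, facing);
}
```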
I saw another web based viewer and it's capable of doing something like this:
I started digging through the shaders, and the most I could figure out is that this is analytical: an actual line entity is drawn, and the width is achieved by rendering a quad instead of default WebGL lines. There is a bunch of logic in the shader, and my best guess is that the vertex positions are simply updated on every render.
This is a procedural model, so I guess that for cones and cylinders two lines can always be allocated, silhouette points computed, and the lines updated.
If that is the case, would it be a good idea to try to do something like this in the shader (maybe it's already happening and I didn't understand it)? I can see a cylinder being written to attributes or uniforms and the points computed.
Is there an approach like this already documented somewhere?
edit 8/15/17
I have not found any papers or documented techniques about this, but the question got a couple of votes.
Given that I do have information about the cylinders and cones, my idea is to sample the normal of the parametric surface at the vertex, push the surface out by some factor that would cover some number of pixels in screen space, stencil it, and draw a thick line, clipping it with the actual shape of the surface.
The traditional shader-based method is Gooch shading. The original paper is here:
http://artis.imag.fr/~Cyril.Soler/DEA/NonPhotoRealisticRendering/Papers/p447-gooch.pdf
There is also the old-fashioned OpenGL technique from Jeff Lander.

Hooking into hidden surface removal/backface culling to swap textures in WebGL?

I want to swap textures on the faces of a rotating cube whenever they face away from the camera; detecting these faces is not entirely equivalent to, but very similar to, hidden surface removal. Is it possible to hook into the built-in backface culling function/depth buffer to determine this, particularly if I want to extend this to more complex polygons?
There is a simple solution using dot products described here but I am wondering if it's possible to hook into existing functions.
I don't think you can hook into WebGL's internal processing, but even if it were possible it wouldn't be the best way for you. Doing this would require your GPU to switch the current texture on a triangle-by-triangle basis, messing with internal caches, and in general our GPUs don't like branching.
You can however render your mesh with first texture and face culling enabled, then set your cull face direction to opposite and render your mesh with 2nd texture. This way you'll get different texture on front- and back-facing triangles.
Thinking about it more: if you have a correctly closed mesh that uses face culling, you shouldn't see any difference, because you'll never see a back face. So the only cases where this approach is useful are transparent meshes or not-quite-closed ones, like billboards. If you want to use it with transparency, you'll need to pick the rendering order carefully.
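The two-pass idea might look something like this, assuming a WebGL context `gl` and a hypothetical `draw(texture)` helper that binds the texture and issues the draw call for the already-bound mesh:

```javascript
// Draw the same mesh twice: front faces with one texture, back faces
// with another, using the cull-face direction to select which faces
// survive each pass.
function drawTwoSided(gl, draw, frontTexture, backTexture) {
  gl.enable(gl.CULL_FACE);

  gl.cullFace(gl.BACK);   // pass 1: back faces culled, front faces drawn
  draw(frontTexture);

  gl.cullFace(gl.FRONT);  // pass 2: front faces culled, back faces drawn
  draw(backTexture);
}
```

For transparent meshes you would reverse the passes (back faces first, then front faces) so that blending composites back-to-front.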

Shadow Mapping - artifacts on thin wall orthogonal to light

I'm having an issue with shadow mapping and faces that are back-facing to the light that I can't seem to get past. I'm still at a relatively early stage of optimizing my engine, yet even with everything hand-tuned for this one piece of geometry it still looks like garbage.
The geometry is a skinny wall that is "curved" via about 5 different chunks of wall. When I create my depth map I'm culling faces that are front-facing to the light. This definitely helps, but the front faces on the other side of the wall seem to be what is causing the z-fighting/projective shadowing.
Some notes on the screenshot:
Front faces are culled when the depth texture (from the light) is being drawn
I have the near and far planes tuned just for this chunk of geometry (set at 20 and 25 respectively)
One directional light source, coming down on a slight angle toward the right side of the scene, enough to indicate that wall should be shadowed, but mostly straight down
Using a ludicrously large 4096x4096 shadow map texture
All lighting is disabled, but know that I am doing soft lighting (and hence vertex normals for the vertices) even on this wall
As mentioned here, the conclusion is that you should not shadow polygons that are back-facing to the light. I'm struggling with this particular issue because I don't want to pass the face normals all the way through to the fragment shader just to rule out the true back faces there; however, if anyone feels this is the best or only solution for this geometry, that's what I'll have to do. Considering the pipeline doesn't make it easy or obvious to pass face normals through, it doesn't feel like the path of least resistance. Note that the normals I am passing are the vertex normals, to allow for softer lighting effects around the edges (which will likely include both non-shadowed and shadowed surfaces).
Note that I am also getting some nasty perspective aliasing. My next step is to work on cascaded shadow maps, but without fixing this first I feel like I'm just delaying the inevitable, as I've already hand-tightened the view as best I can (or so I think).
Anyway, I feel like I'm missing something, so any thoughts or help at all would be most appreciated!
EDIT
To be clear, the wall technically should NOT be in shadow, based on where the light is coming from.
Below is an image with shadowing turned off. This is just using the vertex normals to calculate diffuse lighting; it's not pretty (too much geometry is visible), but it does show that some of the edges are somewhat visible.
So yes, the wall SHOULD be in shadow, but I'm hoping I can get the smoothing working better so the edges can have some diffuse lighting. If it needs to be completely in shadow, I'm fine with that, whether it's the shadow map that puts it in shadow or my code doing so explicitly because the face normal points away from the light; but passing the face normal through to my vertex/fragment shaders still doesn't seem like the path of least resistance.
Perhaps these will help illustrate my problem better, or perhaps bring to light some fundamental understanding I am missing.
EDIT #2
I've included the depth texture below. You can see the wall in question in the bottom left, and from the screenshot you can see how I've trimmed the depth values to roughly 0.4-1, meaning the depth values of that wall start in the 0.4 range. So it's not PERFECTLY clipped for it, but it's close. Does that seem reasonable? I'm pretty sure it's a full 24- or 32-bit depth buffer, via the DEPTH_COMPONENT extension on iOS. For @starmole: does this help to determine whether it's a scaling error in my projection? Do you think the size/area covered by my map is too large, and that focusing it closer might help?
The problem seems to be that you are:
1. Culling the front faces
2. Looking at the back face
3. Not removing the light from the back face, either because it genuinely isn't lit according to its normal or because there is some inaccuracy in the computation
4. Probably not adding some epsilon
(1) and (2) mean that there will be Z-fighting between the shadow map and the back faces.
Also, the shadow map resolution is not going to help you - just look at the wall in the shadow map, it's one pixel thick.
Recommendations:
Epsilons. Make sure that Z > lightZ + epsilon.
Epsilons. Make sure that the wall is facing the light (dot with the normal > epsilon), so that a wall that is very nearly orthogonal to the light still counts as shadowed.
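A sketch of both recommendations together, with illustrative epsilon values (depths normalized to [0, 1] in light space; all names are hypothetical):

```javascript
const DEPTH_EPSILON = 0.005;  // bias against shadow-map Z-fighting
const FACING_EPSILON = 0.01;  // treat nearly-orthogonal faces as unlit

// Dot product of two 3-component vectors.
function dot3(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// true = fragment is in shadow.
function inShadow(fragmentDepth, shadowMapDepth, normal, lightDir) {
  // Faces pointing away from (or nearly edge-on to) the light are
  // shadowed outright, so the shadow map never has to resolve them.
  if (dot3(normal, lightDir) <= FACING_EPSILON) return true;
  // Depth comparison with a bias to absorb shadow-map quantization.
  return fragmentDepth > shadowMapDepth + DEPTH_EPSILON;
}
```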
