Hooking into hidden surface removal/backface culling to swap textures in WebGL? - algorithm

I want to swap the textures on the faces of a rotating cube whenever they face away from the camera; detecting these faces is not exactly the same problem as hidden surface removal, but it is very similar. Is it possible to hook into the built-in backface culling / depth buffer machinery to determine this, particularly if I want to extend this to more complex polygons?
There is a simple solution using dot products described here, but I am wondering whether it's possible to hook into existing functions.

I don't think you can hook into WebGL's internal processing, but even if it were possible it wouldn't be the best approach for you. Doing this would require the GPU to switch the current texture on a triangle-by-triangle basis, trashing internal caches, and in general GPUs don't like if commands.
You can, however, render your mesh with the first texture and face culling enabled, then flip the cull face direction and render the mesh again with the second texture. This way you'll get a different texture on front- and back-facing triangles.
Thinking about it further: if you have a correctly closed mesh that uses face culling, you shouldn't see any difference, because you'll never see a back face. So the only cases where this is useful are transparent meshes or meshes that aren't properly closed, like billboards. If you want to use this approach with transparency, you'll need to pick the rendering order carefully.
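A minimal sketch of the two-pass approach above in raw WebGL, assuming gl is a WebGL context, drawCube() is a hypothetical helper that issues the draw call, and frontTexture/backTexture have already been created:

```js
// Two passes over the same geometry: each pass culls one winding,
// so each pass only shades the faces the other pass skipped.
gl.enable(gl.CULL_FACE);

// Pass 1: cull back faces, draw the front-facing triangles with the first texture.
gl.cullFace(gl.BACK);
gl.bindTexture(gl.TEXTURE_2D, frontTexture); // assumed to be created elsewhere
drawCube();                                  // hypothetical draw helper

// Pass 2: cull front faces, draw the back-facing triangles with the second texture.
gl.cullFace(gl.FRONT);
gl.bindTexture(gl.TEXTURE_2D, backTexture);  // assumed to be created elsewhere
drawCube();

// Restore the usual culling state for the rest of the scene.
gl.cullFace(gl.BACK);
```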

Related

Is it more performant for three.js to load a mesh that's already been triangulated than a mesh using quads?

I've read that Three.js triangulates all mesh faces; is that correct?
Then I realized that most of the glTF models I've been using have quad faces. It's very easy to triangulate faces in Blender, so I'm curious whether pre-triangulating the faces will result in a quicker load of the mesh.
Thanks in advance, and if you have any other performance tips for three.js and glTFs (besides those listed at https://discoverthreejs.com/tips-and-tricks/), that would be super helpful!
glTF, in its current form, does not support quad faces, only triangles. Current glTF exporters (including Blender's) triangulate the model when creating the glTF file. Some importers will automatically try to merge the triangles back together on import.
By design, glTF stores its data in much the same layout as WebGL's vertex attributes, so that it can be rendered efficiently with minimal pre-processing. But there are some things you can do when creating a model to help it reach these goals:
Combine materials when possible, to reduce the number of draw calls.
Combine meshes/primitives when possible, also to reduce draw calls (you can check the draw-call count with the sketch after this list).
Be aware that discontinuous normals/UVs increase vertex count (again because of vertex attributes).
Avoid creating textures filled with solid colors. Use Blender's default color/value node inputs instead.
Keep texture sizes web-friendly and power-of-two. Mobile clients sometimes can't handle anything larger than 2048x2048; you might also try 1024x1024, etc.
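If you want to sanity-check the draw-call advice above, three.js reports per-frame draw calls and triangle counts through renderer.info. A rough sketch, assuming an existing renderer, scene and camera, and the standard GLTFLoader addon ('model.glb' is a placeholder path):

```js
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

const loader = new GLTFLoader();
loader.load('model.glb', (gltf) => {
  scene.add(gltf.scene); // add the loaded glTF scene graph
});

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
  // Roughly one draw call per mesh/material combination in the file.
  // (Log once or throttle this in practice.)
  console.log('draw calls:', renderer.info.render.calls,
              'triangles:', renderer.info.render.triangles);
}
animate();
```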

Unity, fresnel shader on raw image

Hello, I'm trying to achieve the effect in the image below (a shine/light effect, but only on top of the raw image).
Unfortunately I can't figure out how to do it. I've tried some shaders and assets from the Asset Store, but so far none of them has worked, and I don't know much about shaders.
The raw image is a UI element, and it displays a render texture that is being captured by a camera.
I'm totally lost here; any help on how to create that effect would be appreciated.
Fresnel shaders use the angle between the surface normal and the view vector (usually via their dot product) to detect which pixels are facing the viewer and which aren't. A UI plane will always face the user, so no luck there.
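To make that concrete, here is the usual fresnel/rim term in plain JavaScript, purely to illustrate the math (in Unity it would live in a shader, not a script); n and v are assumed to be unit-length normal and view-direction vectors:

```js
// A common fresnel/rim approximation: strong at grazing angles, zero when
// the surface faces the viewer head-on.
function fresnel(n, v, power = 3.0) {
  const nDotV = n[0] * v[0] + n[1] * v[1] + n[2] * v[2];
  return Math.pow(1.0 - Math.max(nDotV, 0.0), power);
}

// On a flat UI quad the normal points straight at the camera everywhere,
// so nDotV is ~1 and the term is ~0: no rim effect, as noted above.
```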
Solving this with shaders can be done in two main ways: either you bake a normal map of the imagined "curvature" of the outer edge (example), or you create a signed distance field (example) or some similar method that maps the distance to the edge. A normal map would probably allow for the most complex effects, and I am sure that some fresnel shaders could work with it too. It does, however, require you to make a model of the shape and bake the normals from that.
A signed distance field, on the other hand, can be generated from an image with a script, so if you have a lot of images it might be the fastest approach. Computing the edge distance in real time inside the shader would not really work, since you'd have to sample a very large number of neighboring pixels, which might make the shader 10-20 times slower depending on how thick you need the edge to be.
If you don't need the image to be that dynamic, then maybe just creating an inner-glow black/white texture in Photoshop and overlaying it with an additive shader would work better for you. If you don't know how to write shaders, the two approaches above are a bit of a tall order.

Transparency with complex shapes in three.js

I'm trying to render a fairly complex lamp using Three.js: https://sayduck.com/3d/xhcn
The product is split up in multiple meshes similar to this one:
The main issue is that I also need to use transparent PNG textures (in order to achieve the complex shape while keeping polygon counts low) like this:
As you can see from the live demo, this gives really weird results, especially when rotating the camera around the lamp - I believe due to z-ordering of the meshes.
I've been reading answers to similar questions on SO, like https://stackoverflow.com/a/15995475/5974754 or https://stackoverflow.com/a/37651610/5974754 to get an understanding of the underlying mechanism of how transparency is handled in Three.js and WebGL.
I think that, in theory, what I need to do is explicitly define a renderOrder for each mesh with a transparent texture on every frame (because the order based on distance to the camera changes when moving around), so that Three.js knows which mesh is currently closest to the camera.
However, even ignoring for the moment that explicitly setting the order each frame seems far from trivial, I am not sure I understand how to set this order in theory.
My meshes have fairly complex shapes and are quite intertwined, which means that from a given camera angle some parts of mesh A can be closer to the camera than some parts of mesh B, while elsewhere parts of mesh B are closer.
In this situation it seems impossible to say which mesh is closer, and thus to define a proper renderOrder.
Have I understood correctly, and this is basically reaching the limits of what WebGL can handle?
Otherwise, if this is doable, is the approach with two render scenes (one for opaque meshes first, then one for transparent ones ordered back to front) the right one? How should I go about defining the back to front renderOrder the way that Three.js expects?
Thanks a lot for your help!
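For reference, a hedged sketch of the per-frame renderOrder idea described in the question, assuming transparentMeshes is an array of the transparent meshes and camera is the active camera; it orders whole meshes only, so it cannot resolve the intertwined case described above:

```js
import * as THREE from 'three';

const worldPos = new THREE.Vector3();

// Call once per frame, before renderer.render(scene, camera).
function updateRenderOrder() {
  transparentMeshes
    .map((mesh) => ({
      mesh,
      distance: mesh.getWorldPosition(worldPos).distanceTo(camera.position),
    }))
    .sort((a, b) => b.distance - a.distance) // back to front
    .forEach((entry, index) => {
      entry.mesh.renderOrder = index; // farthest mesh is drawn first
    });
}
```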

Does the GPU process invisible things?

I'm making a game in Unity 5; it's Minecraft-like. For the world rendering I don't know whether I should destroy the cubes I can't see or just make them invisible.
My idea was to destroy them, but creating them again each time they become visible would take too much processing power, so I'm looking for alternatives. Is making them invisible a viable solution?
I'll be loading a ton of cubes at the same time; for those unfamiliar with Minecraft, here is a screenshot so that you get the idea.
That is just a part of what is rendered at the same time in a typical session.
Unity, like all graphics engines, can end up asking the GPU to process geometry that will not be visible on screen. The processes that try to limit this are culling and depth testing:
Frustum culling - prevents objects fully outside of the camera's viewing area (frustum) from being rendered. The viewing frustum is defined by the near and far clipping planes and the four planes connecting near and far on each side. This is always on in Unity and is defined by your camera's settings. Excluded objects are not sent to the GPU.
Occlusion culling - prevents objects that are within the camera's view frustum but completely occluded by other objects from being rendered. This is not on by default. For information on how to enable and configure it, see occlusion culling in the Unity manual. Occluded objects are not sent to the GPU.
Back face culling - prevents polygons whose normals face away from the camera from being rendered. This occurs at the shader level, so the geometry IS processed by the GPU. Most shaders do cull back faces. See the Cull setting in the Unity shader docs.
Z-culling/depth testing - prevents polygons that won't be seen, because they are further from the camera than opaque geometry already rendered this frame, from being rendered. Only fully opaque (no transparency) polygons can occlude in this way. Depth testing is also done on the GPU, and therefore the geometry IS processed by the GPU. This process can be controlled by the ZWrite and ZTest settings described in the Unity shader docs. The last two states map directly onto raw GPU pipeline toggles, as sketched below.
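For readers coming from the WebGL questions above, a sketch of those last two states in raw WebGL terms (assuming gl is a WebGL context); Unity's Cull, ZTest and ZWrite settings ultimately drive the same kind of pipeline state:

```js
// Back face culling: triangles wound away from the camera are discarded
// before rasterization (Unity's Cull setting).
gl.enable(gl.CULL_FACE);
gl.cullFace(gl.BACK);

// Depth testing: fragments behind already-written depth values are discarded
// (ZTest), and opaque draws write their depth so they can occlude later ones (ZWrite).
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LEQUAL);
gl.depthMask(true); // typically set to false for transparent passes
```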
On a related note, if you are using so many geometrically identical blocks make sure you are using a prefab. This allows Unity to reuse the same set of triangles rather than having 2 x 6 x thousands in your scene, thereby reducing GPU memory load.
A middle ground between rendering the object as invisible and destroying it is to keep the underlying object in memory but detach it from the scene graph.
This gives you all the rendering-speed benefits of destroying it, but when it comes time to put it back you won't need to pay for recreation; just reattach it at the right place in the graph.

Three.js Transparency Errors With Multiple Particle Systems Sorting

I have two THREE.ParticleSystem instances whose particles use textures with alpha transparency: one uses AdditiveBlending (a fire texture), the other NormalBlending (a smoke texture), and both use simple custom vertex and fragment shaders.
Each ParticleSystem has "sortParticles = true", and independently they work perfectly. However, when both types of particles overlap, the first particle system (the fire texture) shows the kind of transparency depth error normally associated with "sortParticles = false" (see image).
It seems that the first particle system is not rendering properly, likely because all of its particles are drawn before the other system's, even when particles from the other system are behind them, resulting in transparency artifacts.
Perhaps one possible solution would be for sortParticles to somehow sort across both systems. Is this possible? Is there a "global particle sort flag" of some sort, or a way to force sortParticles to span both systems?
Another, somewhat more involved, solution could be to use a single sorted ParticleSystem but somehow vary both the texture and the blending mode per particle. Is this possible? I have an idea of how this could be done in the shader, but I'm concerned about adding a conditional for performance reasons.
Open to any and all solutions. Thanks for any advice, ideas or help!
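A hedged sketch of the single-system idea from the question, written in current three.js terms (THREE.ParticleSystem has since become THREE.Points); scene, fireTexture, smokeTexture and the positions/texChoice arrays are assumed to exist, and per-particle depth sorting is not shown here. Texture choice can vary per particle via an attribute and a mix() instead of an if(); the blending mode, however, is material-level state, so it stays fixed for the whole draw:

```js
import * as THREE from 'three';

// positions: Float32Array of xyz per particle; texChoice: Float32Array of
// 0.0 (smoke) or 1.0 (fire) per particle -- both assumed to exist.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setAttribute('texChoice', new THREE.BufferAttribute(texChoice, 1));

const material = new THREE.ShaderMaterial({
  uniforms: {
    fireMap: { value: fireTexture },
    smokeMap: { value: smokeTexture },
  },
  vertexShader: `
    attribute float texChoice;
    varying float vChoice;
    void main() {
      vChoice = texChoice;
      vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
      gl_PointSize = 300.0 / -mvPosition.z; // crude size attenuation
      gl_Position = projectionMatrix * mvPosition;
    }
  `,
  fragmentShader: `
    uniform sampler2D fireMap;
    uniform sampler2D smokeMap;
    varying float vChoice;
    void main() {
      vec4 smoke = texture2D(smokeMap, gl_PointCoord);
      vec4 fire  = texture2D(fireMap, gl_PointCoord);
      gl_FragColor = mix(smoke, fire, vChoice); // mix() instead of an if()
    }
  `,
  transparent: true,
  depthWrite: false,              // typical for soft particles
  blending: THREE.NormalBlending, // one blend mode for the whole system
});

scene.add(new THREE.Points(geometry, material));
```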

Resources