Shader with alpha masking other objects - three.js

I'm trying to make a very simple shadow shader, which consists of a plane with a shader showing a radial gradient in both color and alpha.
Beneath this shadow lies another plane with the same kind of shader, but with a linear gradient.
And as a background for all of this, a linear gradient from dark blue to light blue.
The problem is that when my camera approaches the ground, the shadow plane masks the floor.
Why does this happen, and what can I do to prevent it?
https://codesandbox.io/s/epic-sun-po9j3
https://po9j3.csb.app/

You'd need to post code to check for sure, but it likely happens because three.js sorts the order in which it draws things based on the distance from the camera to the center of each object.
You can force a different order by setting Object3D.renderOrder.
three.js also generally draws opaque things before transparent things, so my guess is that your ground plane and your shadow plane are both set to transparent: true. The ground can be set to transparent: false, in which case it will be drawn first.
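For example, a minimal sketch (the mesh names ground and shadowPlane are placeholders, not taken from your sandbox):

// Draw the opaque ground in the opaque pass, before any transparent objects.
ground.material.transparent = false;

// Keep the shadow alpha-blended, but force it to be drawn after the ground.
shadowPlane.material.transparent = true;
shadowPlane.renderOrder = 1;   // default renderOrder is 0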
You might find this article useful. It shows a similar example.
As for why there is a hole: it's because of the depth buffer. If something in front gets drawn first, then the pixels behind it are not drawn. So if the shadow happens to be drawn first, it ends up looking like a hole, because the pixels of the plane behind it are not drawn.
See this

Related

Black background for three.js sprites with depthTest true

I have been experimenting with converting the custom attribute BufferGeometry example (https://threejs.org/examples/webgl_buffergeometry_custom_attributes_particles.html) into a fly-through animation, and I find that the sprites have dark backgrounds (including ones I create myself) if depthTest is set to true. See the image.
The sprite in the custom attribute example has a transparent background but this appears to be ignored when it is rendered if depthTest is set to true.
I have tried numerous custom blending rules but cannot find a way to remove the background, only to reduce the effect a bit. The background disappears if depthTest is set to false.
Is this a known limitation? Is there a work around?
I am modifying this question to add clearer images with a different ball sprite (also with a transparent background). This image has depthTest set to true for the custom ShaderMaterial used in the https://threejs.org/examples/webgl_buffergeometry_custom_attributes_particles.html three.js example.
By comparison, this uses multiple PointsMaterials from a different three.js example (https://threejs.org/examples/webgl_points_sprites.html), also with depthTest set to true and using the PointsMaterial map parameter for the sprite.
As you can see, the second PointsMaterial example works as expected. Because PointsMaterial only accepts one fixed size and color, I need to create 36 different point geometries to render this image.
I would prefer to use the custom shader in the first example (which has custom size and color attributes and requires only one geometry). Is there a way to define a custom shader to support depthTest like the PointsMaterial does?
WebGL has a depth buffer that's used to find out which pixels are hiding behind other pixels. If a pixel is hiding behind another, it doesn't get rendered. Typically that's what you would want, why would you waste GPU time rendering pixels you're never going to see? Consider the example depthbuffer below, white squares are closer to the camera, while darker squares are further away:
If there was a square hiding behind another one, you wouldn't see it in the depthBuffer, you'd only see the closest one. The depthBuffer doesn't know that your squares have partial transparency and you want some parts to be see-through. This is an issue in other realtime engines too, this is an example from Unity:
The squares that are closer are blocking part of the squares that are further away in the depthBuffer, so the occluded pixels don't get rendered. That's why setting depthTest to false is useful. You're basically telling the renderer to stop testing which pixels are behind others, and just render all of them. Here it is again with depthTest = false:
If you must set depthTest on, (for instance, if you want the sprites to hide behind a solid wall) you could set depthWrite to false on the sprite material. It would still check which pixels are behind others, but the pixels from the sprites wouldn't get written to the depthBuffer, so no sprite could block another sprite.
depthTest: true; // Tests pixel depth
depthTest: false; // Does not test pixel depth
depthWrite: true; // Writes pixel depth to depthBuffer
depthWrite: false; // Does not write pixel depth to depthBuffer
As for the blending mode, I prefer THREE.AdditiveBlending because it gives the particles a nice glowing effect when they accumulate on top of one another. But that's up to you.
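As a rough sketch of how the custom ShaderMaterial could be configured (the shader sources and sprite texture come from the example; the variable names here are placeholders):

// Particles still depth-test against solid geometry but never occlude each other.
const particleMaterial = new THREE.ShaderMaterial({
  uniforms: { pointTexture: { value: spriteTexture } },  // your transparent sprite texture
  vertexShader: vertexShaderSource,      // the example's custom size/color vertex shader
  fragmentShader: fragmentShaderSource,  // samples pointTexture and multiplies by the particle color
  transparent: true,
  depthTest: true,                       // sprites can still hide behind solid objects
  depthWrite: false,                     // but they don't block other sprites
  blending: THREE.AdditiveBlending       // optional: additive glow where sprites overlap
});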

Translucent ray of light in three.js

We are trying to create a sphere/globe that sends out translucent rays. At the surface the ray is solid white, but the further it goes out into space, the more transparent it becomes; you can see this in the image. What we have at the moment is shown on the left, but what we want is depicted on the right. We also want the effect to be like a cone of light that disperses the further it goes out into space.

Per-object post-processing

Suppose I need to render the following scene:
Two cubes, one yellow, another red.
The red cube needs to 'glow' with red light, the yellow one does not glow.
The cubes are rotating around the common center of gravity.
The camera is positioned in such a way that when the red, glowing cube is close to the camera, it partially obstructs the yellow cube, and when the yellow cube is close to the camera, it partially obstructs the red, glowing one.
If not for the glow, the scene would be trivial to render. With the glow, I can see at least 2 ways of rendering it:
WAY 1
Render the yellow cube to the screen.
Compute where the red cube will end up on the screen (easy, we have the vertices + the model view matrix), so render it to an off-screen FBO just big enough (leave margins for the glow); make sure to save the Depths to a texture.
Post-process the FBO and make the glow.
Now the hard part: merge the FBO with the screen. We need to take into account the Depths (which we have stored in a texture), so it looks like we need to do the following:
a) render a quad textured with the FBO's color attachment;
b) set up the ModelView matrix appropriately (we need to move the texture by some vector, because in step 2 we intentionally rendered the red cube to an FBO smaller than the screen, for speed reasons!);
c) in the 'merging' fragment shader, write gl_FragDepth from the FBO's Depth attachment texture (and not from FragCoord.z); a sketch of such a shader is shown after this list.
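A sketch of what the step 4c merging shader could look like (written here as a GLSL ES 3.00 string in JavaScript purely for illustration; the sampler and varying names are invented, and the gl_FragDepth write is exactly the part that costs you the early z-test):

const mergeFragmentShader = `#version 300 es
precision highp float;
uniform sampler2D uColorFBO;   // the FBO's color attachment (post-processed red cube + glow)
uniform sampler2D uDepthFBO;   // the Depths saved to a texture in step 2
in vec2 vUv;
out vec4 fragColor;

void main() {
  fragColor = texture(uColorFBO, vUv);
  // Write the depth stored in the FBO, not this quad's own FragCoord.z:
  gl_FragDepth = texture(uDepthFBO, vUv).r;
}`;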
WAY2
Render both cubes to an off-screen FBO; set up the stencil so that the unobstructed part of the red cube is marked with 1s.
Post-process the FBO so that the marked area gets blurred, and blend this in to make the glow.
Blit the FBO to the screen
WAY 1 works, but its major problem is speed, namely step 4c: writing to gl_FragDepth in the fragment shader disables the early z-test.
WAY 2 also kind of works, and looks like it should be much faster, but it does not give 100% correct results.
The problem is that when the red cube is partially obstructed by the yellow one, pixels of the red cube that are close to the yellow one get 'yellowish' when we blur them, i.e. the closer, yellow cube 'creeps' into the glow.
I guess I could remedy this by stopping the blur when the pixels I am reading suddenly decrease in Depth (meaning we just jumped from a further object to a closer one), but that would mean twice as many texture accesses when blurring (in addition to fetching the COLOR texture we would have to keep fetching the DEPTH texture), plus a conditional statement in the blurring fragment shader. I haven't tried it, but I am not convinced it would be any faster than WAY 1, and even that wouldn't give 100% correct results (the red pixels close to the border with the yellow cube would be influenced only by the visible part of the red cube, rather than the whole (-blurRadius, +blurRadius) area, so in that region the glow would not be 100% the same).
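For what it's worth, a rough, untested sketch of that depth-aware blur (one blur direction only; all names are placeholders) would look something like this, which also makes the doubled texture-fetch cost visible:

const depthAwareBlur = `
precision highp float;
uniform sampler2D uColor;      // COLOR texture being blurred
uniform sampler2D uDepth;      // matching DEPTH texture
uniform vec2 uTexelStep;       // one texel in the blur direction
varying vec2 vUv;

void main() {
  float centerDepth = texture2D(uDepth, vUv).r;
  vec4 sum = vec4(0.0);
  float weight = 0.0;
  for (int i = -8; i <= 8; i++) {                    // fixed radius: GLSL ES 1.00 wants constant loop bounds
    vec2 uv = vUv + float(i) * uTexelStep;
    if (texture2D(uDepth, uv).r >= centerDepth) {    // skip pixels of closer (occluding) objects
      sum += texture2D(uColor, uv);
      weight += 1.0;
    }
  }
  gl_FragColor = sum / max(weight, 1.0);
}`;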
Would anyone have suggestions how to best implement such 'per-object post-processing' ?
EDIT:
What I am writing is a sort of OpenGL ES library for graphics effects. Clients are able to give it a series of instructions like 'take this Mesh, texture it with this, apply the following matrix transformations to its ModelView matrix, apply the following distortions to its vertices and the following set of fragment effects, and render to the following Framebuffer'.
In my library, I already have what I call 'matrix effects' (modifying the Model View) 'vertex effects' (various vertex distortions) and 'fragment effects' (various changes of RGBA per-fragment).
Now I am trying to add what I call 'post-processing' effects, this 'GLOW' being the first of them. I define the effect and envision it exactly as you described above.
The effects are applied to whole Meshes; thus now I need what I call 'per-object post-processing'.
The library is aimed mostly at kind of '2.5D' usages, like GPU-accelerated UIs in Mobile Apps, 2-2.5D games (think Candy Crush), etc. I doubt people will actually ever use it for any real 3D, large game.
So FPS, while always important, is a bit less crucial then usually.
I try really hard to keep the API 'Mesh-local', i.e. the rendering pipeline only knows about the current Mesh it is rendering. My main complaint about the above is that it has to be aware of the whole set of Meshes we are going to render to a given Framebuffer. That being said, if 'Mesh-locality' is impossible or cannot be done efficiently with post-processing effects, then I guess I'll have to give it up (and make my tutorials more complicated).
Yesterday I was thinking about this:
# 'Almost-Mesh-local' algorithm for rendering N different Meshes, some of them glowing
Create an FBO, attach a texture the size of the screen to COLOR0 and another texture 1/4 the size of the screen to COLOR1.
Enable the DEPTH test, clear COLOR/DEPTH
FOREACH( glowing Mesh )
{
    use MRT to render it to COLOR0 and COLOR1 in one go
}
Detach COLOR1, attach a STENCIL texture
Set up STENCIL so that the test always passes and writes 1s when the Depth test passes
Switch off DEPTH/COLOR writes
FOREACH( glowing Mesh )
{
    enlarge it by N% (the amount of GLOW needs to be modifiable!)
    render to STENCIL   // i.e. mark the future 'glow' regions with 1s in the stencil
}
Set up STENCIL so that the test always passes and writes 0 when the Depth test passes
Switch on DEPTH/COLOR writes
FOREACH( not glowing Mesh )
{
    render to COLOR0/STENCIL/DEPTH   // now COLOR0 contains everything rendered except the GLOW; STENCIL marks the unobstructed glowing areas with 1s
}
Blur the COLOR1 texture with BLUR radius 'N'
Merge COLOR0 and COLOR1 to the screen in the following way:
    IF ( STENCIL == 0 ) take the pixel from COLOR0
    ELSE blend COLOR0 and COLOR1
END
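To make the stencil states above concrete, here is a rough WebGL-flavoured sketch of the equivalent state changes (the library itself targets OpenGL ES rather than WebGL, so this is only meant to show which calls are involved):

// "Set up STENCIL so that the test always passes and writes 1s when the Depth test passes":
gl.enable(gl.STENCIL_TEST);
gl.stencilFunc(gl.ALWAYS, 1, 0xff);           // stencil test always passes, reference value 1
gl.stencilOp(gl.KEEP, gl.KEEP, gl.REPLACE);   // REPLACE with 1 only where the depth test also passes
gl.depthMask(false);                          // "Switch off DEPTH writes"
gl.colorMask(false, false, false, false);     // "Switch off COLOR writes"
// ... render the enlarged glowing Meshes ...

// "Set up STENCIL so that the test always passes and writes 0 when the Depth test passes":
gl.stencilFunc(gl.ALWAYS, 0, 0xff);
gl.stencilOp(gl.KEEP, gl.KEEP, gl.REPLACE);
gl.depthMask(true);                           // "Switch on DEPTH/COLOR writes"
gl.colorMask(true, true, true, true);
// ... render the not-glowing Meshes to COLOR0/STENCIL/DEPTH ...

// For the final merge, the stencil either drives a stencil test (only STENCIL == 1 passes)
// or is sampled as a texture in the merge shader to choose COLOR0 vs. the COLOR0/COLOR1 blend.
gl.stencilFunc(gl.EQUAL, 1, 0xff);
gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP);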
This is not Mesh-local (we still need to be able to process all 'glowing' Meshes first) although I call it 'almost Mesh-local' because it differentiates between meshes only on the basis of the Effects being applied to them, and not which one is where or which obstructs which.
It also can have problems when two GLOWING Meshes obstruct each other (blend does not have to be done in the right order) although with the GLOW being half-transparent, I am hoping the final look will be more or less ok.
Looks like it can even be turned into a completely 'Mesh-local' algorithm by doing one giant
FOREACH( Mesh )
{
    if( glowing )
    {
    }
    else
    {
    }
}
although at the cost of having to attach and detach things from the FBO and set up the STENCIL differently at each loop iteration.
A knee-jerk suggestion is to do the hybrid:
compute where the red cube will end up on screen, so render it to an off-screen FBO just big enough (or one the same size as the screen, since creating FBOs on the hoof may not be efficient); don't worry about depths, it's only the colours you're after;
render both cubes to an off-screen FBO; set up stencil so that the unobstructed part of the red cube is marked with 1s;
post-process to the screen by using an original pixel from (2) wherever the stencil is 0, or a blurred pixel computed by sampling (1) wherever the stencil is 1.

Drawing transparent sprites in a 3D world in OpenGL(LWJGL)

I need to draw transparent 2D sprites in a 3D world. I tried rendering a QUAD, texturing it (using slick_util) and rotating it to face the camera, but when there are many of them the transparency doesn't really work. The sprite closest to the camera blocks the ones behind it if it's rendered before them.
I think it's because OpenGL only draws the object that is closest to the viewer without checking the alpha value.
This could be fixed by sorting them from furthest away to closest, but I don't know how to do that, and wouldn't I have to use Math.sqrt to get the distance? (I've heard it's slow.)
I wonder if there's an easy way of getting transparency in 3D to work correctly. (Enabling something in OpenGL e.g)
Disable depth testing and render transparent geometry back to front.
Or switch to additive blending and hope that looks OK.
Or use depth peeling.
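For the back-to-front sort you only need a consistent ordering, not the true distance, so comparing squared distances avoids the sqrt entirely. A small sketch of the idea (JavaScript-style for brevity; the names are placeholders and the same approach applies in LWJGL):

// Sort sprites far-to-near before drawing them with blending enabled.
function sortBackToFront(sprites, cam) {
  sprites.sort((a, b) => {
    const da = (a.x - cam.x) ** 2 + (a.y - cam.y) ** 2 + (a.z - cam.z) ** 2;
    const db = (b.x - cam.x) ** 2 + (b.y - cam.y) ** 2 + (b.z - cam.z) ** 2;
    return db - da;   // larger squared distance (further away) is drawn first
  });
}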

Drawing Outline with OpenGL ES

Every technique that I've found or tried for rendering an outline in OpenGL uses some function that is not available in OpenGL ES...
What I could actually do is set depthMask to false, draw the object as a 3-pixel-wide line wireframe, re-enable the depthMask and then draw my object. That doesn't work for me, because it outlines only the external parts of my object, not the internal ones.
The following image shows two outlines: the left one is a correct outline, the right one is what I got.
So, can someone point me to a technique that is available on OpenGL ES?
Haven't done one of these for a while, but I think you're almost there! What I would recommend is this:
Keep depthMask enabled, but flip your backface culling to only render the "inside" of the object.
Draw the mesh with a shader that pushes all the verts out along their normals slightly, in a solid color (your outline color, probably black). Make sure that you're drawing solid triangles and not just GL_LINES.
Flip the backface culling back to normal again and re-render the mesh like usual.
The result is that the outlines will only be visible around the points on your mesh where the triangles start to turn away from the camera. This gives you some nice, simple outlines around things like noses, chins, lips, and other internal details.
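As a sketch of those passes in three.js terms (the question is about raw OpenGL ES, so this only illustrates the idea; outlineWidth and the mesh variables are placeholders):

// Pass 1: the "inverted hull" - back faces only, with vertices pushed out along their normals.
const outlineMaterial = new THREE.ShaderMaterial({
  side: THREE.BackSide,   // flipped culling: render the "inside" of the object
  uniforms: { outlineWidth: { value: 0.02 } },
  vertexShader: `
    uniform float outlineWidth;
    void main() {
      vec3 displaced = position + normal * outlineWidth;   // push verts out slightly
      gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
    }`,
  fragmentShader: `
    void main() { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); }   // solid outline color
  `
});
const outlineMesh = new THREE.Mesh(mesh.geometry, outlineMaterial);
mesh.add(outlineMesh);   // reuses the geometry and inherits the original mesh's transform

// Pass 2: the mesh itself renders as usual (normal culling, depth writes on), covering the hull
// everywhere except where its triangles turn away from the camera.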

Resources