Reset depth buffers destroyed by EffectComposer - three.js

I'm trying to render two different scenes and cameras on top of each other, like a HUD. Both render correctly on their own. The following also works as intended, so you can see mainscene under the helpscene:
renderer.render(mainscene, maincamera);
renderer.render(helpscene, helpcamera);
When I'm using EffectComposer to render the main scene, I cannot see helpscene at all; I only see the results of the composer rendering:
renderTargetParameters = { minFilter: THREE.LinearFilter, magFilter: THREE.LinearFilter, format: THREE.RGBAFormat, stencilBuffer: false };
renderTarget = new THREE.WebGLRenderTarget( width, height, renderTargetParameters );
composer = new THREE.EffectComposer(renderer, renderTarget);
---- cut out for brevity ---
composer.render(delta);
renderer.render(helpscene, helpcamera); // has no effect whatsoever on the screen, why?
What is happening here? Again, if I comment either render call out, the remaining one works correctly, but with both enabled I only see the composer scene/rendering. I would expect the helpscene to overlay (or at least overwrite) whatever was rendered before it.
I have quite complex code before renderer.render(helpscene, helpcamera); it may take various render paths and use EffectComposer or not, depending on different settings. But I want the helpscene to always take the simple route with no effects or anything, which is why I'm using a separate render call instead of incorporating it as an EffectComposer pass.
EDIT: Turns out it is because of some funny business with depth buffers (?). If I set material.depthTest = false on everything in the helper scene, it shows more or less correctly. It looks like the depth is set to zero or very low by some composer pass or by the composer itself, and, rather unexpectedly, that has the effect of hiding anything rendered by subsequent render calls.
Because I'm only using LineMaterial in the helper scene, this will do for now, but I expect problems further down the road with the depthTest = false workaround (I might have some real shaded objects there later, which would need depth testing against other objects inside the same helper scene).
So I guess the REAL QUESTION is: how do I reset the depth buffer (or whatever needs resetting) after EffectComposer, so that further render calls are not affected by it? I could also do the helper scene rendering as the last composer pass; it does not make much difference.
I should maybe mention that in one of my composer setups the main RenderPass renders to a texture on a distorted plane geometry placed near a perspective camera created for that purpose (like the orthographic camera & quad setup found in many postprocessing examples, but with distortion). The other setup has a "normal" RenderPass with the actual scene camera, where I would expect the depth information to be such that the helper scene should be visible anyway. I am having the same problem with both alternatives.

...and answering myself. After finding the real cause, it's quite simple:
renderer.clear(false, true, false); will clear the depth buffer, so the overlay render works as expected :)
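For completeness, the overlay flow now looks roughly like this. This is a minimal sketch, assuming renderer.autoClear is already disabled so the second render call does not wipe the composer output; renderer.clearDepth() is just shorthand for the clear call above.
composer.render(delta); // main scene with post-processing
renderer.clearDepth(); // drop the depth values left behind by the composer passes
renderer.render(helpscene, helpcamera); // HUD overlay now draws on top as expected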

Related

Render texture doesn't update changes made, how to ensure this happens?

I'm building a system that has a set of quads in front of each other, forming a layer system. These layers are rendered by an orthographic camera into a render texture, which is used to generate a texture and save it to disk after the layers are populated. It happens that I need to disable some of those layers before the final texture is generated, so I built a module that disables those specific layers' mesh renderers and raises an event to start the render-to-texture conversion.
To my surprise, the final image still contains the disabled layers. I'm really confused about this, because I already debugged the code in every way I could, and those specific layers shouldn't be visible at all considering the code. It must have something to do with how often render textures update, or some other obscure execution-order issue. The entire module is composed of 3 or 4 classes with dozens of lines, so to illustrate the issue more succinctly I'll post only the method where the RT is converted into a texture, with some checks I made just before the RT pixels are read into the new texture:
public void SaveTexture(string textureName, TextureFormat textureFormat)
{
    renderTexture = GetComponent<Camera>().targetTexture;
    RenderTexture.active = renderTexture;
    var finalTexture = new Texture2D(renderTexture.width,
        renderTexture.height, textureFormat, false);
    /* First test: confirm that the marked quad's mesh renderer is, in fact,
    disabled, meaning it shouldn't be visible to the camera and therefore
    invisible in the RT. The console shows "false", so it is disabled.
    Even so, the quad is still rendered in the final image. */
    //Debug.Log(transform.GetChild(6).GetChild(0).GetComponent<MeshRenderer>().enabled);
    /* Second test: change the object's layer, because the projection camera
    has a culling mask set to capture only objects in one specific layer.
    Again, it doesn't work and the quad content is still saved in the final image. */
    //transform.GetChild(6).GetChild(0).gameObject.layer = 0;
    /* Final test: destroy the object to ensure it doesn't appear in the RT.
    This also doesn't work, confirming that no matter what I do, the RT is
    "fixed" at this point of execution and doesn't pick up any changes made
    to its composition. */
    //Destroy(transform.GetChild(6).GetChild(0).gameObject);
    finalTexture.ReadPixels(new Rect(0, 0, renderTexture.width,
        renderTexture.height), 0, 0);
    finalTexture.Apply();
    finalTexture.name = textureName;
    var teamTitle = generationController.activeTeam.title;
    var kitIndex = generationController.activeKitIndex;
    var customDirectory = saveDirectory + teamTitle + "/" + kitIndex + "/";
    StorageManager<Texture2D>.Save(finalTexture, customDirectory, finalTexture.name);
    RenderTexture.active = null;
    onSaved();
}
Funny thing is, if I manually disable that quad in the Inspector (at runtime, just before triggering the method above), it works, and the final texture is generated without the disabled layer.
I tried my best to show my problem; this is one of those issues that are kinda hard to show here, but hopefully somebody will have some insight into what is happening and what I should do to solve it.
There are two possible solutions to my issue (I got the answer at the Unity Forum). The first is to use the OnPreRender and OnPostRender methods to properly organize what should happen before or after the camera's render update. What I ended up doing, though, was calling the camera's manual render method with the GetComponent<Camera>().Render(); line, which updates the camera render manually. Considering that my structure was already in place, this single line solved my problem!

THREE.JS Anti-Alias not working in multi-scene set-up

What's the trick for getting anti-aliasing to work properly on smaller scenes that are overlaid on top of big scenes?
Check out this fiddle here:
https://jsfiddle.net/gilomer88/j974zmq0/6/
When you tap on any of the cubes, a new, smaller "detailsScene" opens up on top of the main scene, and the cube in that "detailsScene" does not look good. (It may not look all that bad here, but trust me, in my real project I'm loading a ".glb" model and it looks really terrible there. And it's not the model that's off; I know that because when I load it into my main scene it looks 100% perfect. Unless I have to re-load it into this smaller scene for some reason...?)
Otherwise I'm pretty sure I set up the renderer for this smaller scene the right way, using:
detailsRenderer.setPixelRatio( window.devicePixelRatio );
(You'll find that bit on line 192 in the JS of the fiddle code.)
Any thoughts?
Anti-aliasing is working fine. The scene is just a bit blurred because the canvas is scaled up while the renderer renders at a smaller size. You should always set the size of the renderer so that it matches the canvas size. Just passing the canvas element to the renderer is not enough to let the renderer know at what size it should render the scene.
detailsRenderer.setSize(detailsCanvas.offsetWidth, detailsCanvas.offsetHeight);
https://jsfiddle.net/sg3fn0tk/
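As a minimal sketch (the canvas lookup and variable names are assumptions, not taken from the fiddle), the overlay renderer setup would then look something like this:
var detailsCanvas = document.getElementById('detailsCanvas'); // hypothetical element id
var detailsRenderer = new THREE.WebGLRenderer({ canvas: detailsCanvas, antialias: true });
detailsRenderer.setPixelRatio(window.devicePixelRatio);
// match the drawing buffer to the canvas' CSS size; false = leave the inline style alone
detailsRenderer.setSize(detailsCanvas.offsetWidth, detailsCanvas.offsetHeight, false);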

Three.js: Trouble combining stencil clipping with EffectComposer

NOTE: It appears I oversimplified my initial question. See below for the edit.
I'm trying to combine the technique shown in the clipping/stencil example of Three.js, which uses the stencil buffer to render 'caps' when clipping geometry, with an EffectComposer-based rendering pipeline, but I am running into some difficulties. A fiddle demonstrating the problem can be found at https://jsfiddle.net/2vc76ajd/1/.
The EffectComposer has two passes: a RenderPass and a ShaderPass using CopyShader (see code below).
composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));
var shaderPass = new ShaderPass(CopyShader);
shaderPass.enabled = false;
composer.addPass(shaderPass);
The first renders the scene as usual; the latter merely copies the render target onto a fullscreen quad. If I disable the ShaderPass, everything works as intended: the geometry is clipped, and the cutting planes are drawn in a different color:
When the ShaderPass is enabled by clicking the 'copy pass' checkbox in the upper right, however, the entire cutting plane gets rendered, rather than just the 'caps':
Presumably there is some interaction here between offscreen render targets and stencil buffers. However, I have so far been unable to find a way to have subsequent render passes look the same as the initial render. Can anyone tell me what I am missing?
EDIT: While WestLangley's answer solved my initial problem, it unfortunately doesn't work when you're using an SSAOPass, which is what I was doing before trying to simplify the problem for the question. I have posted an updated fiddle at https://jsfiddle.net/bavL98hf/1/, which includes the proposed fix and now toggles between a RenderPass or an SSAOPass. With SSAO turned on, the result is this:
I have tried setting stencilBuffer to true on all the render targets used in SSAOPass in addition to the ones in EffectComposer, but sadly that doesn't work this time. Can anyone tell me what else I am overlooking?
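For reference, my understanding of the proposed fix for the plain RenderPass case was to hand EffectComposer a render target that keeps its stencil buffer. A rough sketch, with width and height standing for the drawing-buffer size:
var stencilTarget = new THREE.WebGLRenderTarget(width, height, { stencilBuffer: true });
composer = new EffectComposer(renderer, stencilTarget);
composer.addPass(new RenderPass(scene, camera));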

Erroneous bindTexture(TEXTURE_2D, null); call, or bad shader? Texture disappearing with THREE.ShaderMaterial

In two cases, I have a THREE.ShaderMaterial that doesn't correctly render an object, omitting its texture.
In both examples, the middle object uses a basic THREE.MeshPhongMaterial.
Example 1: http://jsfiddle.net/sG9MP/4/ The object that's closest to the screen never shows.
In this one, it works with renderer.render(...) but not with composer.render(...).
renderer.render( scene, camera );
//composer.render();
Example 2: http://jsfiddle.net/sG9MP/5/ Here I'm trying to duplicate the MeshPhongMaterial shader as a base so I can modify it. I tried to replicate it exactly: I copied the uniforms, vertex and fragment shaders, and replicated what's in the object. I can't see anything different, so I don't get why it doesn't work the same as the standard three.js Phong shader.
So there are two cases where I'm using THREE.ShaderMaterial and it's not rendering the shader correctly, and I can't figure out why. In the second example (which is the one I really need fixed; the first was an old test), the WebGL inspector shows that the scene often looks fine until a bindTexture(TEXTURE_2D, null); call that happens under the hood in three.js. Though sometimes it just draws without it. In the first example, it always draws without it.
I feel like I must be missing some sort of flag in the renderer, or the composer, or something. Or, in my second example, where I'm trying to copy the three.js Phong shader, maybe I didn't copy something exactly.
The goal here is just to copy the Phong shader so I can modify its uniforms, vertex and fragment shaders. Sadly, I can't simply .clone() it, since the vertex and fragment shaders can't be modified after the material is compiled.
It looks like while ShaderMaterial.map was being set, ShaderMaterial.uniforms.map.value was not consistently set.
I really don't understand this, though. In some cases I had issues when I did not set things at the top level of the ShaderMaterial; in other cases I had issues when I did not set the uniforms.
In my material, I just went and added this:
for (var k in phongMat) {
    if (typeof phongMat.uniforms[k] != 'undefined') {
        phongMat.uniforms[k].value = phongMat[k];
    }
}
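In other words, the workaround keeps both places in sync. A minimal sketch of the same idea for a single property (texture here is a placeholder for whatever map the material uses):
phongMat.map = texture; // top-level property, as on a regular MeshPhongMaterial
phongMat.uniforms.map.value = texture; // the uniform the shader actually samples
phongMat.needsUpdate = true; // recompile in case any defines depend on the property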

Three.js Retrieve data from WebGLRenderTarget (water sim)

I am trying to port this (http://madebyevan.com/webgl-water/) over to THREE. I think I'm getting close (I just want the simulation for now; I don't care about caustics/refraction yet). I'd like to get it working with shaders for the GPU boost.
Here's my current THREE setup using shaders: http://jsfiddle.net/EqLL9/2/
(the second smaller plane is for debugging what's currently in the WebGLRenderTarget)
What I'm struggling with is reading data back from the WebGLRenderTarget (rtTexture in my example). In the example you'll see the 4 vertices surrounding the center point are displaced upwards. This is correct (after 1 simulation step) as it starts with the center point being the only point of displacement.
If I could read the data back from the rtTexture and update the data texture (buf1) each frame, the simulation should animate properly. How does one read the data directly from a WebGLRenderTarget? All the examples demonstrate how to send data TO the target (render to it), not how to read FROM it. Or am I doing it all wrong? Something tells me I'll have to work with multiple textures and somehow swap back and forth, similar to how Evan did it.
TL;DR: How can I copy data from a WebGLRenderTarget to a DataTexture after a call like this:
// render to rtTexture
renderer.render( sceneRTT, cameraRTT, rtTexture, true );
EDIT: May have found the solution at jsfiddle.net/gero3/UyGD8/9/
Will investigate and report back.
Ok, I figured out how to read the data using native webgl calls:
// Render first scene into texture
renderer.render( sceneRTT, cameraRTT, rtTexture, true );
// read render texture into buffer
var gl = renderer.getContext();
gl.readPixels( 0, 0, simRes, simRes, gl.RGBA, gl.UNSIGNED_BYTE, buf1.image.data );
buf1.needsUpdate = true;
The simulation now animates. However, it doesn't seem to be functioning properly (probably a dumb error I'm overlooking). It seems that the height values are never damped, and I'm not sure why. The data from buf1 is used in the fragment shader, which calculates the new height (red in RGBA), damps the value (multiplies it by 0.99), then renders it to a texture. I then read this updated data from the texture back into buf1.
Here's the latest fiddle: http://jsfiddle.net/EqLL9/3/
I'll keep this updated as I progress along.
EDIT: Works great now. Just got normals implemented, and now working on environment reflection and refraction (again purely through shaders). http://relicweb.com/webgl/rt.html
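(For later readers: newer versions of three.js also expose renderer.readRenderTargetPixels, which wraps the same gl.readPixels call used above; a minimal sketch with the names from the snippet above:)
renderer.readRenderTargetPixels(rtTexture, 0, 0, simRes, simRes, buf1.image.data); // read the target straight into the DataTexture's backing array
buf1.needsUpdate = true;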
