Render texture doesn't update after changes are made, how do I ensure this happens? - render-to-texture

I'm building a system which has a set of quads in front of each other, forming a layer system. These layers are rendered by an orthographic camera into a render texture, which is used to generate a texture and save it to disk once the layers are populated. I need to disable some of those layers before the final texture is generated, so I built a module that disables those specific layers' mesh renderers and raises an event to start the render-to-texture conversion.
To my surprise, the disabled layers still appear in the final image. I'm really confused by this, because I have already debugged the code every way I could and, according to the code, those specific layers shouldn't be visible at all. It must have something to do with how often render textures update, or some other obscure execution-order issue. The entire module is composed of 3 or 4 classes with dozens of lines, so to illustrate the issue more succinctly I'll post only the method where the RT is converted into a texture, along with some checks I made just before the RT pixels are read into the new texture:
public void SaveTexture(string textureName, TextureFormat textureFormat)
{
    renderTexture = GetComponent<Camera>().targetTexture;
    RenderTexture.active = renderTexture;
    var finalTexture = new Texture2D(renderTexture.width,
        renderTexture.height, textureFormat, false);

    /*First test, confirming that the marked quad's mesh renderer is, in fact,
    disabled, meaning it shouldn't be visible to the camera and consequently
    should be invisible in the RT. The console shows "false", meaning it is
    disabled. Even so, the quad is still rendered in the final image.*/
    //Debug.Log(transform.GetChild(6).GetChild(0).GetComponent<MeshRenderer>().enabled);

    /*Second test, changing the object's layer, because the projection camera
    has a culling mask set to only capture objects on one specific layer.
    Again, it doesn't work and the quad content is still saved in the final image.*/
    //transform.GetChild(6).GetChild(0).gameObject.layer = 0;

    /*Final test, destroying the object to ensure it doesn't appear in the RT.
    This also doesn't work, confirming that no matter what I do, the RT is
    "fixed" at this point of execution and doesn't pick up any changes made
    to its composition.*/
    //Destroy(transform.GetChild(6).GetChild(0).gameObject);

    finalTexture.ReadPixels(new Rect(0, 0, renderTexture.width,
        renderTexture.height), 0, 0);
    finalTexture.Apply();
    finalTexture.name = textureName;

    var teamTitle = generationController.activeTeam.title;
    var kitIndex = generationController.activeKitIndex;
    var customDirectory = saveDirectory + teamTitle + "/" + kitIndex + "/";
    StorageManager<Texture2D>.Save(finalTexture, customDirectory, finalTexture.name);

    RenderTexture.active = null;
    onSaved();
}
The funny thing is, if I manually disable that quad in the Inspector (at runtime, just before triggering the method above), it works, and the final texture is generated without the disabled layer.
I tried my best to show the problem; it's one of those issues that is hard to demonstrate here, but hopefully somebody will have some insight into what is happening and what I should do to solve it.

There are two possible solutions to my issue (I got the answer on the Unity Forum). The first is to use the OnPreRender and OnPostRender methods to properly organize what should happen before and after the camera renders. What I ended up doing, though, was calling the camera's manual render method with GetComponent<Camera>().Render(), which updates the camera's render immediately. Since my structure was already in place, this single line solved my problem!
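For reference, here is a minimal sketch of how that manual render call fits in just before the pixels are read back (the texture format and variable names are illustrative, not the exact code from my project):
// Force the camera to render its target RenderTexture with the current scene state,
// so renderers disabled this frame are excluded before ReadPixels runs.
var cam = GetComponent<Camera>();
var rt = cam.targetTexture;
cam.Render();
RenderTexture.active = rt;
var tex = new Texture2D(rt.width, rt.height, TextureFormat.RGBA32, false);
tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
tex.Apply();
RenderTexture.active = null;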

Related

Three.js: Trouble combining stencil clipping with EffectComposer

NOTE: It appears I oversimplified my initial question. See below for the edit.
I'm trying to combine the technique shown in the clipping/stencil example of Three.js, which uses the stencil buffer to render 'caps' when clipping geometry, with an EffectComposer-based rendering pipeline, but I am running into some difficulties. A fiddle demonstrating the problem can be found at https://jsfiddle.net/2vc76ajd/1/.
The EffectComposer has two passes: a RenderPass and a ShaderPass using CopyShader (see code below).
composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));
var shaderPass = new ShaderPass(CopyShader);
shaderPass.enabled = false;
composer.addPass(shaderPass);
The first renders the scene as usual; the latter merely copies the render target onto a fullscreen quad. If I disable the ShaderPass everything works as intended: the geometry is clipped, and the cutting planes are drawn in a different color.
When the ShaderPass is enabled by clicking the 'copy pass' checkbox in the upper right, however, the entire cutting plane gets rendered, rather than just the 'caps'.
Presumably there is some interaction here between offscreen render targets and stencil buffers. However, I have so far been unable to find a way to have subsequent render passes look the same as the initial render. Can anyone tell me what I am missing?
EDIT: While WestLangley's answer solved my initial problem, it unfortunately doesn't work when using an SSAOPass, which is what I was doing before trying to simplify the problem for this question. I have posted an updated fiddle at https://jsfiddle.net/bavL98hf/1/, which includes the proposed fix and now toggles between a RenderPass and an SSAOPass. With SSAO turned on, the problem reappears.
I have tried setting stencilBuffer to true on all the render targets used in SSAOPass, in addition to the ones in EffectComposer, but sadly that doesn't work this time. Can anyone tell me what else I am overlooking?
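For context, the fix that solved the original (non-SSAO) case was to give the EffectComposer a render target created with a stencil buffer; a rough sketch, with the target size and remaining pass setup assumed:
// Create the composer with a render target that keeps a stencil buffer,
// so the stencil-based capping survives the offscreen render pass.
var target = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight, {
    stencilBuffer: true
});
composer = new EffectComposer(renderer, target);
composer.addPass(new RenderPass(scene, camera));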

Three.js within web worker: Simulating animation without rendering to canvas

I have a hypothetical question:
Is it possible to simulate an animation of objects without rendering it to the canvas? I just want to capture the objects' positions using Vector.project(camera) and present them using CSS, with THREE.DeviceOrientationControls controlling how the camera "views" the simulation.
I tried commenting out THREE.WebGLRenderer, but it seems that THREE.PerspectiveCamera then never updates its matrixWorld property. Hence the camera appears not to move, and Vector.project(camera) gives a static value. I'm doing this because I need to put my three.js code inside a web worker.
Do I still need to use THREE.WebGLRenderer to have a working simulation?
UPDATE:
I checked the following:
I dug deeper into ((three.scene.getObjectByName("one")).matrixWorld.getPosition()).project(three.camera); and inspected the following values under the above conditions (inside a web worker, no renderer), using this example:
matrix: {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":2.1445069313049316,"6":0,"7":0,"8":0,"9":0,"10":-1.000100016593933,"11":-1,"12":5.4684929847717285,"13":2.1445069313049316,"14":-0.2000100016593933,"15":0}}
camera.projectionMatrix: {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":2.1445069313049316,"6":0,"7":0,"8":0,"9":0,"10":-1.000100016593933,"11":-1,"12":0,"13":0,"14":-0.2000100016593933,"15":0}}
camera.matrixWorld: {"elements":{"0":1,"1":0,"2":0,"3":0,"4":0,"5":1,"6":0,"7":0,"8":0,"9":0,"10":1,"11":0,"12":-1.7000000476837158,"13":-1,"14":0,"15":1}}
matrix.getInverse(camera.matrixWorld): {"elements":{"0":1,"1":0,"2":0,"3":0,"4":0,"5":1,"6":0,"7":0,"8":0,"9":0,"10":1,"11":0,"12":1.7000000476837158,"13":1,"14":0,"15":1}}
matrix.multiplyMatrices(camera.projectionMatrix, matrix.getInverse(camera.matrixWorld)): {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":2.1445069313049316,"6":0,"7":0,"8":0,"9":0,"10":-1.000100016593933,"11":-1,"12":5.4684929847717285,"13":2.1445069313049316,"14":-0.2000100016593933,"15":0}}
But when unmodified (with the renderer in place), I inspected the following:
matrix: {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":2.1445069313049316,"6":0,"7":0,"8":0,"9":0,"10":-1.000100016593933,"11":-1,"12":5.4684929847717285,"13":2.1445069313049316,"14":-0.2000100016593933,"15":0}}
camera.projectionMatrix: {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":2.1445069313049316,"6":0,"7":0,"8":0,"9":0,"10":-1.000100016593933,"11":-1,"12":0,"13":0,"14":-0.2000100016593933,"15":0}}
camera.matrixWorld: {"elements":{"0":1,"1":0,"2":0,"3":0,"4":0,"5":-2.220446049250313e-16,"6":-1,"7":0,"8":0,"9":1,"10":-2.220446049250313e-16,"11":0,"12":-1.7000000476837158,"13":-1,"14":0,"15":1}}
matrix.getInverse(camera.matrixWorld): {"elements":{"0":1,"1":0,"2":0,"3":0,"4":0,"5":-2.220446049250313e-16,"6":1,"7":0,"8":0,"9":-1,"10":-2.220446049250313e-16,"11":0,"12":1.7000000476837158,"13":-2.220446049250313e-16,"14":1,"15":1}}
matrix.multiplyMatrices(camera.projectionMatrix, matrix.getInverse(camera.matrixWorld)): {"elements":{"0":3.2167603969573975,"1":0,"2":0,"3":0,"4":0,"5":-4.761761943205948e-16,"6":-1.000100016593933,"7":-1,"8":0,"9":-2.1445069313049316,"10":2.2206681307011713e-16,"11":2.220446049250313e-16,"12":5.4684929847717285,"13":-4.761761943205948e-16,"14":-1.2001099586486816,"15":-1}}
I noticed that the camera.matrixWorld property differs significantly between the two conditions, and I don't understand what makes the difference.
Apparently, the following lines from THREE.WebGLRenderer.render are still needed to update the camera.matrixWorld property:
scene.updateMatrixWorld();
camera.updateMatrixWorld();
camera.matrixWorldInverse.getInverse(camera.matrixWorld);
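Putting that together, a minimal sketch of the per-frame update inside the worker, with no renderer involved (the object name and the source of the camera orientation are assumptions for illustration):
// Update matrices manually each frame, since no WebGLRenderer is doing it for us
// (camera orientation is assumed to be set elsewhere, e.g. by the controls).
scene.updateMatrixWorld();
camera.updateMatrixWorld();
camera.matrixWorldInverse.getInverse(camera.matrixWorld);

// Project an object's world position to normalized device coordinates for CSS placement.
var pos = new THREE.Vector3();
pos.setFromMatrixPosition(scene.getObjectByName("one").matrixWorld).project(camera);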

Reset depth buffers destroyed by EffectComposer

I'm trying to render two different scenes and cameras on top of each other, like a HUD. Both render correctly on their own. This also works as intended, so you can see mainscene under the helpscene:
renderer.render(mainscene, maincamera);
renderer.render(helpscene, helpcamera);
When I'm using EffectComposer to render the main scene, I cannot see the helpscene at all; I only see the results of the composer rendering:
renderTargetParameters = { minFilter: THREE.LinearFilter, magFilter: THREE.LinearFilter, format: THREE.RGBAFormat, stencilBuffer: false };
renderTarget = new THREE.WebGLRenderTarget( width, height, renderTargetParameters );
composer = new THREE.EffectComposer(renderer, renderTarget);
---- cut out for brevity ---
composer.render(delta);
renderer.render(helpscene, helpcamera); // has no effect whatsoever on the screen, why?
What is happening here? Again, if I comment out either render call, the other works correctly. But with both enabled, I only see the composer scene/rendering. I would expect the helpscene to overlay (or at least overwrite) whatever is rendered before it.
I have quite complex code before renderer.render(helpscene, helpcamera); it might take various render paths and use EffectComposer or not based on different settings. But I want the helpscene to always take the simple route with no effects, which is why I'm using a separate render call and not incorporating it as an EffectComposer pass.
EDIT: It turns out this is caused by some funny business with the depth buffer. If I set material.depthTest = false on everything in the helper scene, it shows up roughly correctly. It looks like the depth buffer is set to zero (or very low values) by some composer pass or by the composer itself, and, rather unexpectedly, that has the effect of hiding anything rendered by subsequent render calls.
Because I'm only using LineMaterial in the helper scene this will do for now, but I expect problems further down the road with the depthTest = false workaround (there might be real shaded objects in there later, which would need depth testing against other objects inside the same helper scene).
So I guess the REAL QUESTION IS: how do I reset the depth buffer (or whatever is responsible) after EffectComposer, so that further render calls are not affected by it? Rendering the helper scene as the last composer pass does not make much difference either.
I should maybe mention that in one of my composer setups the main RenderPass renders to a texture on a distorted plane geometry near a perspective camera created for that purpose (like the orthographic camera and quad setup found in many postprocessing examples, but with distortion). The other setup has a "normal" RenderPass with the actual scene camera, where I would expect the depth information to be such that the helper scene should show up anyway. I am having the same problem with both alternatives.
...and answering myself. After finding the real cause, it's quite simple:
renderer.clear(false, true, false); will clear the depth buffer so the overlay render works as expected :)
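Put together, a minimal sketch of the render sequence under that fix (composer set up as above):
// Render the composed main scene, clear only the depth buffer, then render the overlay.
composer.render(delta);
renderer.clear(false, true, false);   // color = false, depth = true, stencil = false
renderer.render(helpscene, helpcamera);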

AS3 tile map rendering (with 1000's of tiles)

Just first off, I'll say that the context here is ActionScript 3.0 (IDE: Flash Builder) along with the Starling Framework.
So, I want to create a Tile Map that could be used for a platformer or something similar.
I want to use 8x8 pixel tiles on an 800x600 pixel stage, and the problem I am having is that I don't know how to add these 7500+ tile objects to the stage without dramatically reducing the framerate.
I've found that the drop in performance comes from adding each tile to the stage, not from initializing each Tile object.
I know I'm not giving much specific information, but what I'm asking is whether there is a standard way to draw thousands of static objects onto the stage without a loss of performance. I feel like there is a way; I just have yet to find it.
Update:
After all of your kind help, I have found what seems to be a great solution. At first I wanted to implement Amy's solution, using copyPixels() and draw() to make one large BitmapData for the whole map and then render that to the screen. Then I wanted to know whether there was a Starling equivalent, because everything would be much simpler if I didn't have to mix Starling with native Flash.
Thanks to Amy again, I looked into Starling's RenderTexture class a bit more and found that using its drawBundled() and draw() methods, I could easily draw all of the tiles into a RenderTexture, then put the RenderTexture into an Image (Starling's Image class) and just add that Image to the screen.
That solution is a million times faster than the silly slow solutions I tried before, such as flattening sprites. It's faster in initialization time, and there seems to be no drop in framerate while the RenderTexture's Image is on the screen.
The one thing I still want to test is whether it is easy to update the graphics of a tile during gameplay. Say water spreads from a source and a "Grass" tile has to become a "Water" tile; would the RenderTexture and its Image be able to change their appearance without some sort of lag spike or performance hiccup? I will test this out soon.
Thank you all for your help!
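For reference, a minimal sketch of the drawBundled() approach described in the update above (the tile source texture, tile size, and map dimensions are assumptions for illustration):
// Stamp every tile into one RenderTexture inside drawBundled(), then display
// the result as a single Starling Image.
var mapTexture:RenderTexture = new RenderTexture(800, 600);
var tileImage:Image = new Image(tileTexture);   // one reusable Image, repositioned per tile

mapTexture.drawBundled(function():void
{
    for (var row:int = 0; row < 75; row++)
    {
        for (var col:int = 0; col < 100; col++)
        {
            tileImage.x = col * 8;
            tileImage.y = row * 8;
            mapTexture.draw(tileImage);
        }
    }
});

addChild(new Image(mapTexture));                // one display object for the whole map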
Don't add that many objects to the stage. Instead, create a BitmapData the size of your stage and use copyPixels() or draw() to draw onto it. Here's an article that should get you started. You can then take the concepts you learned in that post and learn anything specific you need that's not covered (flashandmath.com has a lot of good tutorials about pixel manipulation).
You need to manage the tiles that are added and removed as you move around the game. Only add to the stage tiles that are within 800 px of the center of the screen; once a tile is beyond 800 px from the center, remove it. That should keep everything moving smoothly. Good luck.
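A minimal sketch of that culling idea (names like tiles, cameraX, and cameraY are hypothetical):
// Keep only tiles near the view center on the display list; detach the rest.
for each (var tile:Sprite in tiles)
{
    var near:Boolean = Math.abs(tile.x - cameraX) < 800 && Math.abs(tile.y - cameraY) < 800;
    if (near && tile.parent == null)
        addChild(tile);
    else if (!near && tile.parent != null)
        removeChild(tile);
}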
Or look into drawing/copying your tiles into one bitmap. You would basically be stamping your tiles onto the new bitmap. Here is an example from Adobe:
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.geom.Rectangle;
import flash.geom.Point;

// Source bitmap (blue) and destination bitmap (green).
var bmd1:BitmapData = new BitmapData(40, 40, false, 0x000000FF);
var bmd2:BitmapData = new BitmapData(80, 40, false, 0x0000CC44);

// Copy a 20x20 region of bmd1 into bmd2 at (10, 10).
var rect:Rectangle = new Rectangle(0, 0, 20, 20);
var pt:Point = new Point(10, 10);
bmd2.copyPixels(bmd1, rect, pt);

// Display both bitmaps.
var bm1:Bitmap = new Bitmap(bmd1);
this.addChild(bm1);
var bm2:Bitmap = new Bitmap(bmd2);
this.addChild(bm2);
bm2.x = 50;
More info on the BitmapData class. I think copyPixels() is what you are after.

Can I control the draw order in FX Composer?

I'm using NVIDIA FX Composer to write a semi-transparent CgFX shader. Everything is fine, except that in my render view, objects in the back of the scene are getting drawn on top of my shaded object.
Here's my technique:
technique Main {
    pass p0
    {
        DepthTestEnable = true;
        DepthMask = false;
        CullFaceEnable = false;
        BlendEnable = true;
        BlendFunc = int2(SrcAlpha, OneMinusSrcAlpha);
        DepthFunc = LEqual;
        VertexProgram = compile vp40 std_VS();
        FragmentProgram = compile gp4fp std_PS();
    }
}
If I turn on DepthMask, then objects in the back get masked out entirely, which defeats the purpose of transparency. It seems like the objects are not being drawn back-to-front. Is there a way to confirm that, and can I control the order in which FX Composer's renderer draws items to the screen?
This can't be done inside a shader; you need to change the application using it. The general rule is to draw all solid objects first, and then draw all transparent objects on top.
Once you've drawn a transparent object, you can't render objects behind it and expect them to be blended; OpenGL can either render them or not render them (due to z-buffer culling).
Drawing objects back to front is usually too expensive to do in real time, as it would require re-sorting the whole scene 60 times a second!
