I'm using Nvidia FX Composer to write a semi-transparent CgFX shader. Everything is fine, except that in my render view, objects at the back of the scene are getting drawn on top of my shaded object.
Here's my technique:

technique Main {
    pass p0
    {
        DepthTestEnable = true;
        DepthMask = false;
        CullFaceEnable = false;
        BlendEnable = true;
        BlendFunc = int2(SrcAlpha, OneMinusSrcAlpha);
        DepthFunc = LEqual;
        VertexProgram = compile vp40 std_VS();
        FragmentProgram = compile gp4fp std_PS();
    }
}
If I turn on DepthMask, then objects in the back get masked out entirely, which defeats the purpose of transparency. It seems like the objects are not being drawn back-to-front. Is there a way to confirm that, and can I control the order in which FX Composer's renderer draws items to the screen?
This can't be done inside a shader; you need to change the application using it. The general rule is to draw all solid objects first, and then all transparent objects on top.
Once you've drawn a transparent object, you can't render objects behind it and expect them to be blended. OpenGL will either render them or discard them via the z-buffer test.
Sorting every polygon in the scene back to front is usually too expensive to do in real time, as it would mean re-sorting the whole scene 60 times a second. In practice, renderers sort only the transparent objects, and usually per object rather than per polygon.
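The fix the answer describes lives in the application's draw loop: submit opaque geometry first in any order, then sort just the transparent objects by their distance to the camera and draw them far-to-near. A minimal sketch of that ordering in plain JavaScript (the object and camera shapes here are hypothetical stand-ins, not an FX Composer API):

```javascript
// Hypothetical scene objects: each has a world position and a transparency flag.
const camera = { position: { x: 0, y: 0, z: 10 } };
const objects = [
  { name: "glassA", transparent: true,  position: { x: 0, y: 0, z: 0 } },
  { name: "wall",   transparent: false, position: { x: 0, y: 0, z: -5 } },
  { name: "glassB", transparent: true,  position: { x: 0, y: 0, z: 5 } },
];

function squaredDistance(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return dx * dx + dy * dy + dz * dz;
}

// Opaque objects can go in any order; the z-buffer sorts them out.
const opaque = objects.filter(o => !o.transparent);

// Transparent objects are drawn afterwards, farthest from the camera first,
// so each one blends against everything already drawn behind it.
const transparent = objects
  .filter(o => o.transparent)
  .sort((a, b) =>
    squaredDistance(b.position, camera.position) -
    squaredDistance(a.position, camera.position));

const drawOrder = [...opaque, ...transparent].map(o => o.name);
```

Sorting per object rather than per polygon keeps the cost to one O(n log n) sort over the transparent set, which is why real-time engines can afford it every frame.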
I'm building a system which has a set of quads in front of each other, forming a layer system. These layers are rendered by an orthographic camera with a render texture, which is used to generate a texture and save it to disk after the layers are populated. It happens that I need to disable some of those layers before the final texture is generated, so I built a module that disables those specific layers' mesh renderers and raises an event to start the render-to-texture conversion.
To my surprise, the disabled layers are still present in the final image. I'm really confused by this, because I have already debugged the code every way I could, and given the code those specific layers shouldn't be visible at all. It must have something to do with how often render textures update, or some other obscure execution order. The entire module is composed of 3 or 4 classes with dozens of lines, so to illustrate the issue more succinctly, I'll post only the method where the RT is converted into a texture, with some checks I made just before the RT pixels are read into the new texture:
public void SaveTexture(string textureName, TextureFormat textureFormat)
{
    renderTexture = GetComponent<Camera>().targetTexture;
    RenderTexture.active = renderTexture;
    var finalTexture = new Texture2D(renderTexture.width,
        renderTexture.height, textureFormat, false);

    /*First test: confirming that the marked quad's mesh renderer
    is, in fact, disabled, meaning it shouldn't be visible to the camera
    and consequently invisible in the RT. The console shows "false", meaning it's
    disabled. Even so, the quad is still rendered in the final image.*/
    //Debug.Log(transform.GetChild(6).GetChild(0).GetComponent<MeshRenderer>().enabled);

    /*Second test: changing the object's layer, because the projection camera
    has a culling mask set to capture only objects in one specific layer.
    Again, it doesn't work, and the quad's content is still saved in the final image.*/
    //transform.GetChild(6).GetChild(0).gameObject.layer = 0;

    /*Final test: destroying the object to ensure it doesn't appear in the RT.
    This also doesn't work, confirming that no matter what I do, the RT is
    "fixed" at this point of execution and doesn't pick up any changes made
    to its composition.*/
    //Destroy(transform.GetChild(6).GetChild(0).gameObject);

    finalTexture.ReadPixels(new Rect(0, 0, renderTexture.width,
        renderTexture.height), 0, 0);
    finalTexture.Apply();
    finalTexture.name = textureName;
    var teamTitle = generationController.activeTeam.title;
    var kitIndex = generationController.activeKitIndex;
    var customDirectory = saveDirectory + teamTitle + "/" + kitIndex + "/";
    StorageManager<Texture2D>.Save(finalTexture, customDirectory, finalTexture.name);
    RenderTexture.active = null;
    onSaved();
}
Funny thing is, if I manually disable that quad in the inspector (at runtime, just before triggering the method above), it works, and the final texture is generated without the disabled layer.
I tried my best to show my problem; this is one of those issues that are kinda hard to show here, but hopefully somebody will have some insight into what is happening and what I should do to solve it.
There are two possible solutions to my issue (I got the answer at the Unity Forum). The first is to use the methods OnPreRender and OnPostRender to properly organize what should happen before or after the camera's render update. What I ended up doing, though, was calling the camera's manual render method with the line GetComponent<Camera>().Render();, which updates the camera render manually. Since my structure was already in place, this single line solved my problem!
I created 1000 hidden objects with BoxGeometry using THREE.js. I set object.visible = false to hide each object; however, this causes the raycasting/interaction to stop working.
I expect that hiding the objects will give me a performance boost.
I can hide the box objects by setting material.visible = false on each object, but the performance of my app is still terrible.
How can I achieve the required raycasting interaction with hidden objects in a performance-friendly way?
One way to achieve what you require would be to not add your Box objects to your scene at all, which ensures they are never rendered, and to pass them directly to a THREE.Raycaster to determine whether any of those boxes was intersected.
You could, for instance, create a THREE.Raycaster object from your ray primitive, and then pass an array of your Box objects to the .intersectObjects() method to determine ray intersection.
In code, that would look something like this:
// ray is your intersection primitive
const raycaster = new THREE.Raycaster(ray.origin, ray.direction);
// boxObjects is an array of THREE.Object3D's representing your 1000 boxes
const intersectionResult = raycaster.intersectObjects(boxObjects);
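For reference, the per-box test a THREE.Raycaster performs is essentially a ray versus axis-aligned bounding box check. If you ever need that test without three.js, a standalone slab-method version looks like this (plain JavaScript; the min/max box format is a hypothetical stand-in for a Box3):

```javascript
// Axis-aligned box given by min/max corners; ray by origin and direction.
function rayIntersectsBox(origin, dir, box) {
  let tMin = -Infinity, tMax = Infinity;
  for (const axis of ["x", "y", "z"]) {
    if (dir[axis] === 0) {
      // Ray parallel to this slab: the origin must already lie inside it.
      if (origin[axis] < box.min[axis] || origin[axis] > box.max[axis]) return false;
    } else {
      let t1 = (box.min[axis] - origin[axis]) / dir[axis];
      let t2 = (box.max[axis] - origin[axis]) / dir[axis];
      if (t1 > t2) [t1, t2] = [t2, t1];
      tMin = Math.max(tMin, t1);
      tMax = Math.min(tMax, t2);
      if (tMin > tMax) return false; // Slab intervals don't overlap: no hit.
    }
  }
  return tMax >= 0; // Intersection must lie in front of the ray origin.
}

const box = { min: { x: -1, y: -1, z: -1 }, max: { x: 1, y: 1, z: 1 } };
// Ray pointing at the box hits it; the same ray pointing away misses.
const hit  = rayIntersectsBox({ x: 0, y: 0, z: 5 }, { x: 0, y: 0, z: -1 }, box);
const miss = rayIntersectsBox({ x: 0, y: 0, z: 5 }, { x: 0, y: 0, z: 1 }, box);
```

With 1000 boxes, testing against coarse bounds like this before (or instead of) triangle-level intersection is also how you keep the picking itself cheap.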
I need to render a single specific mesh from a scene into a texture using a THREE.WebGLRenderTarget. I have already achieved that, during the rendering of the scene, all other meshes except this one are ignored, so I have basically reached my goal. What I dislike is that there is still a lot of unnecessary work going on for the whole scene graph during the render process. I need to render this texture every frame, so with my current method I get extreme FPS drops. (There are lots of meshes in the whole scene graph.)
What I found was the function renderBufferImmediate of the THREE.WebGLRenderer. (Link to the renderer source code here.) My pseudocode to achieve my goal would look like this:
var mesh = some_Mesh;
var renderer = some_WebGLRenderer;
var renderTarget = some_WebGLRenderTarget;
renderer.setRenderTarget(renderTarget);
var materialProperties = renderer.properties.get(mesh.material);
var program = materialProperties.program;
renderer.renderBufferImmediate(mesh, program, mesh.material);
var texture = renderTarget.texture;
The renderBufferImmediate function takes an instance of a THREE.Object3D, a WebGLShaderProgram, and a THREE.Material. The problem I see here: the implementation of this function tries to look up properties of the Object3D which, AFAIK, don't exist (like hasPositions or hasNormals). In short: my approach doesn't work.
I would be grateful if someone could tell me whether I can use this function for my purpose (meaning I am currently using it wrong), or whether there is another solution to my problem.
Thanks in advance.
I am wondering how I would be able to use animated shapes inside a movieclip that would be acting as a mask?
In my Animate CC canvas file I have an instance (stripeMask) that should mask the below instance called mapAnim.
stripeMask contains shapes that are animating in.
So when the function maskIn is called, the playhead should move to the first frame inside the stripeMask clip (the one after frame 0) and animate the mask like so:
function maskIn(){
//maskAnimation to reveal image below
stripeMask.gotoAndPlay(1);
}
I love Animate CC and it works great, but the need for more complex, animated masks is there, and it's not easy to achieve unless I am missing something here.
Thanks!
Currently you can only use a Shape as a mask, not a Container or MovieClip.
If you want to do something more complex, you can use something like AlphaMaskFilter, but it has to be cached, and then updated every time the mask OR the content updates:
stripeMask.cache(0, 0, w, h); // the mask clip must be cached too
something.filters = [new createjs.AlphaMaskFilter(stripeMask.cacheCanvas)];
something.cache(0, 0, w, h);
// On change:
something.updateCache(); // Re-caches
The source of the AlphaMaskFilter must be an image, so you can either point to a Bitmap image, or a cacheCanvas of a mask clip you have also cached. Note that if the mask changes, the cache has to be updated as well.
This is admittedly not a fantastic solution, and we are working on other options.
I am developing an augmented reality project using Three.js and aruco-js. I wrote my code so that all my 3D objects are added to the (empty) scene at the beginning, but their data gets loaded only on marker detection.
Now I want to create an interface for changing the objects appearance, starting with the possibility of scaling an object.
So I created an updateObject() function to set the new values like this:
function updateObject(object, rotation, translation)
{
    ...
    ...
    ...
    // first method
    object.scale.x = 200;
    object.scale.y = 200;
    object.scale.z = 200;
    // second attempt
    object.scale.set(300, 300, 300);
}
I tried both of the methods shown above to set the scale of my object, but they have no effect on the rendered image. The interesting thing is that the values on the objects in my scene3d object are the values I set in my function. So why doesn't it have any impact on the output?
I'm not very familiar with 3D programming in WebGL or Three.js, so if you could give me any hint where the problem might have its origin, I would really appreciate an answer.
FIX:
I took a closer look at the 3D objects I was loading and discovered that they have a child called "mesh" nested inside another child. By changing the scale of only that mesh, I found out that it works this way, but I think it looks very ugly:
scene3d.children[visibleModels[0][0]+3].children[0].children[0].scale.set(2, 2, 2);
//visibleModels is a list of the markers/models that should be loaded
This is only a test for one single object, but at least I found a way to solve it. Is this the usual way to change the scale of objects? If you have a better solution or anything to add, feel free to contribute.
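A less brittle alternative to indexing children by position is to walk the loaded object's hierarchy, the way THREE.Object3D.traverse(callback) does, and scale whichever node is the actual mesh. A sketch with a plain node tree standing in for the loaded model (the names and nesting here are hypothetical):

```javascript
// Minimal stand-in for an Object3D hierarchy: each node has a name,
// a scale, and a children array.
function makeNode(name, children = []) {
  return { name, scale: { x: 1, y: 1, z: 1 }, children };
}

// Depth-first walk over a node and all descendants,
// like THREE.Object3D.traverse(callback).
function traverse(node, callback) {
  callback(node);
  for (const child of node.children) traverse(child, callback);
}

// The loaded model: the actual mesh sits two levels down, as in the question.
const model = makeNode("root", [makeNode("group", [makeNode("mesh")])]);

// Scale every node called "mesh", wherever it is nested.
traverse(model, node => {
  if (node.name === "mesh") {
    node.scale = { x: 2, y: 2, z: 2 };
  }
});
```

With a real three.js object the equivalent is just model.traverse(child => { if (child.isMesh) child.scale.set(2, 2, 2); }), which keeps working even if the loader changes the nesting depth.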
You could also try to scale the object by changing its matrix with the THREE.Matrix4.makeScale method instead (note that for a direct matrix edit to stick, object.matrixAutoUpdate must be set to false; otherwise three.js rebuilds the matrix from position, quaternion, and scale on the next render):
object.matrix.makeScale( xScale, yScale, zScale );
Or the THREE.Matrix4.scale method, which takes a THREE.Vector3:
object.matrix.scale( new THREE.Vector3( xScale, yScale, zScale ) );